00:00:00.002 Started by upstream project "autotest-nightly" build number 3787
00:00:00.002 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3167
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.056 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-cvl-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.057 The recommended git tool is: git
00:00:00.057 using credential 00000000-0000-0000-0000-000000000002
00:00:00.058 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-cvl-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.074 Fetching changes from the remote Git repository
00:00:00.075 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.105 Using shallow fetch with depth 1
00:00:00.105 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.105 > git --version # timeout=10
00:00:00.136 > git --version # 'git version 2.39.2'
00:00:00.136 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.170 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.170 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.380 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.391 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.403 Checking out Revision 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 (FETCH_HEAD)
00:00:04.403 > git config core.sparsecheckout # timeout=10
00:00:04.414 > git read-tree -mu HEAD # timeout=10
00:00:04.429 > git checkout -f 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=5
00:00:04.446 Commit message: "pool: fixes for VisualBuild class"
00:00:04.446 > git rev-list --no-walk 9bbc799d7020f50509d938dbe97dc05da0c1b5c3 # timeout=10
00:00:04.546 [Pipeline] Start of Pipeline
00:00:04.557 [Pipeline] library
00:00:04.558 Loading library shm_lib@master
00:00:04.558 Library shm_lib@master is cached. Copying from home.
00:00:04.572 [Pipeline] node
00:00:04.579 Running on WFP3 in /var/jenkins/workspace/nvmf-cvl-phy-autotest
00:00:04.581 [Pipeline] {
00:00:04.593 [Pipeline] catchError
00:00:04.595 [Pipeline] {
00:00:04.606 [Pipeline] wrap
00:00:04.613 [Pipeline] {
00:00:04.621 [Pipeline] stage
00:00:04.623 [Pipeline] { (Prologue)
00:00:04.834 [Pipeline] sh
00:00:05.118 + logger -p user.info -t JENKINS-CI
00:00:05.133 [Pipeline] echo
00:00:05.134 Node: WFP3
00:00:05.139 [Pipeline] sh
00:00:05.432 [Pipeline] setCustomBuildProperty
00:00:05.444 [Pipeline] echo
00:00:05.446 Cleanup processes
00:00:05.451 [Pipeline] sh
00:00:05.733 + sudo pgrep -af /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
00:00:05.733 1037461 sudo pgrep -af /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
00:00:05.746 [Pipeline] sh
00:00:06.029 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
00:00:06.030 ++ grep -v 'sudo pgrep'
00:00:06.030 ++ awk '{print $1}'
00:00:06.030 + sudo kill -9
00:00:06.030 + true
00:00:06.042 [Pipeline] cleanWs
00:00:06.052 [WS-CLEANUP] Deleting project workspace...
00:00:06.052 [WS-CLEANUP] Deferred wipeout is used...
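The "Cleanup processes" step above is the usual pre-build sweep: list any processes still referencing the workspace, then force-kill them. A minimal standalone sketch of the same idiom, assuming only the workspace path shown in this job (here `kill -9` receives an empty PID list and the trailing `|| true` mirrors the "+ true" in the trace, so a clean host does not abort the script):

```bash
#!/usr/bin/env bash
# Sweep leftover processes that reference the job workspace before building.
WORKSPACE=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk

# pgrep -af prints "PID CMDLINE"; drop the pgrep invocation itself,
# keep only the PID column, and tolerate an empty list.
pids=$(sudo pgrep -af "$WORKSPACE" | grep -v 'sudo pgrep' | awk '{print $1}')
sudo kill -9 $pids || true
```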
00:00:06.058 [WS-CLEANUP] done
00:00:06.063 [Pipeline] setCustomBuildProperty
00:00:06.075 [Pipeline] sh
00:00:06.352 + sudo git config --global --replace-all safe.directory '*'
00:00:06.415 [Pipeline] nodesByLabel
00:00:06.416 Found a total of 2 nodes with the 'sorcerer' label
00:00:06.424 [Pipeline] httpRequest
00:00:06.428 HttpMethod: GET
00:00:06.428 URL: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz
00:00:06.435 Sending request to url: http://10.211.164.101/packages/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz
00:00:06.447 Response Code: HTTP/1.1 200 OK
00:00:06.448 Success: Status code 200 is in the accepted range: 200,404
00:00:06.448 Saving response body to /var/jenkins/workspace/nvmf-cvl-phy-autotest/jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz
00:00:10.226 [Pipeline] sh
00:00:10.509 + tar --no-same-owner -xf jbp_9bbc799d7020f50509d938dbe97dc05da0c1b5c3.tar.gz
00:00:10.524 [Pipeline] httpRequest
00:00:10.528 HttpMethod: GET
00:00:10.529 URL: http://10.211.164.101/packages/spdk_e55c9a81251968acc91e4d44169353be1987a3e4.tar.gz
00:00:10.530 Sending request to url: http://10.211.164.101/packages/spdk_e55c9a81251968acc91e4d44169353be1987a3e4.tar.gz
00:00:10.542 Response Code: HTTP/1.1 200 OK
00:00:10.543 Success: Status code 200 is in the accepted range: 200,404
00:00:10.544 Saving response body to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk_e55c9a81251968acc91e4d44169353be1987a3e4.tar.gz
00:00:40.747 [Pipeline] sh
00:00:41.028 + tar --no-same-owner -xf spdk_e55c9a81251968acc91e4d44169353be1987a3e4.tar.gz
00:00:43.572 [Pipeline] sh
00:00:43.862 + git -C spdk log --oneline -n5
00:00:43.862 e55c9a812 vbdev_error: decrement error_num atomically
00:00:43.862 f16e9f4d2 lib/event: framework_get_reactors supports getting pid and tid
00:00:43.862 2d610abe8 lib/env_dpdk: add spdk_get_tid function
00:00:43.862 f470a0dc6 event: do not call reactor events from spdk_thread context
00:00:43.862 8d3fdcaba nvmf: cleanup maximum number of subsystem namespace remanent code
00:00:43.909 [Pipeline] }
00:00:43.926 [Pipeline] // stage
00:00:43.936 [Pipeline] stage
00:00:43.938 [Pipeline] { (Prepare)
00:00:43.957 [Pipeline] writeFile
00:00:43.977 [Pipeline] sh
00:00:44.262 + logger -p user.info -t JENKINS-CI
00:00:44.274 [Pipeline] sh
00:00:44.556 + logger -p user.info -t JENKINS-CI
00:00:44.568 [Pipeline] sh
00:00:44.850 + cat autorun-spdk.conf
00:00:44.850 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:44.850 SPDK_TEST_NVMF=1
00:00:44.850 SPDK_TEST_NVME_CLI=1
00:00:44.850 SPDK_TEST_NVMF_TRANSPORT=rdma
00:00:44.850 SPDK_TEST_NVMF_NICS=e810
00:00:44.850 SPDK_RUN_UBSAN=1
00:00:44.850 NET_TYPE=phy
00:00:44.856 RUN_NIGHTLY=1
00:00:44.861 [Pipeline] readFile
00:00:44.885 [Pipeline] withEnv
00:00:44.887 [Pipeline] {
00:00:44.900 [Pipeline] sh
00:00:45.180 + set -ex
00:00:45.181 + [[ -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/autorun-spdk.conf ]]
00:00:45.181 + source /var/jenkins/workspace/nvmf-cvl-phy-autotest/autorun-spdk.conf
00:00:45.181 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:45.181 ++ SPDK_TEST_NVMF=1
00:00:45.181 ++ SPDK_TEST_NVME_CLI=1
00:00:45.181 ++ SPDK_TEST_NVMF_TRANSPORT=rdma
00:00:45.181 ++ SPDK_TEST_NVMF_NICS=e810
00:00:45.181 ++ SPDK_RUN_UBSAN=1
00:00:45.181 ++ NET_TYPE=phy
00:00:45.181 ++ RUN_NIGHTLY=1
00:00:45.181 + case $SPDK_TEST_NVMF_NICS in
00:00:45.181 + DRIVERS=ice
00:00:45.181 + [[ rdma == \r\d\m\a ]]
00:00:45.181 + DRIVERS+=' irdma'
00:00:45.181 + [[ -n ice irdma ]]
00:00:45.181 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:45.181 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:45.181 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:45.181 rmmod: ERROR: Module i40iw is not currently loaded
00:00:45.181 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:45.181 + true
00:00:45.181 + for D in $DRIVERS
00:00:45.181 + sudo modprobe ice
00:00:45.181 + for D in $DRIVERS
00:00:45.181 + sudo modprobe irdma
00:00:45.440 + exit 0
00:00:45.449 [Pipeline] }
00:00:45.470 [Pipeline] // withEnv
00:00:45.475 [Pipeline] }
00:00:45.492 [Pipeline] // stage
00:00:45.502 [Pipeline] catchError
00:00:45.503 [Pipeline] {
00:00:45.526 [Pipeline] timeout
00:00:45.526 Timeout set to expire in 40 min
00:00:45.528 [Pipeline] {
00:00:45.543 [Pipeline] stage
00:00:45.545 [Pipeline] { (Tests)
00:00:45.561 [Pipeline] sh
00:00:45.844 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-cvl-phy-autotest
00:00:45.845 ++ readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest
00:00:45.845 + DIR_ROOT=/var/jenkins/workspace/nvmf-cvl-phy-autotest
00:00:45.845 + [[ -n /var/jenkins/workspace/nvmf-cvl-phy-autotest ]]
00:00:45.845 + DIR_SPDK=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
00:00:45.845 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-cvl-phy-autotest/output
00:00:45.845 + [[ -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk ]]
00:00:45.845 + [[ ! -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/output ]]
00:00:45.845 + mkdir -p /var/jenkins/workspace/nvmf-cvl-phy-autotest/output
00:00:45.845 + [[ -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/output ]]
00:00:45.845 + [[ nvmf-cvl-phy-autotest == pkgdep-* ]]
00:00:45.845 + cd /var/jenkins/workspace/nvmf-cvl-phy-autotest
00:00:45.845 + source /etc/os-release
00:00:45.845 ++ NAME='Fedora Linux'
00:00:45.845 ++ VERSION='38 (Cloud Edition)'
00:00:45.845 ++ ID=fedora
00:00:45.845 ++ VERSION_ID=38
00:00:45.845 ++ VERSION_CODENAME=
00:00:45.845 ++ PLATFORM_ID=platform:f38
00:00:45.845 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:45.845 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:45.845 ++ LOGO=fedora-logo-icon
00:00:45.845 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:45.845 ++ HOME_URL=https://fedoraproject.org/
00:00:45.845 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:45.845 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:45.845 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:45.845 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:45.845 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:45.845 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:45.845 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:45.845 ++ SUPPORT_END=2024-05-14
00:00:45.845 ++ VARIANT='Cloud Edition'
00:00:45.845 ++ VARIANT_ID=cloud
00:00:45.845 + uname -a
00:00:45.845 Linux spdk-wfp-03 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 02:47:10 UTC 2024 x86_64 GNU/Linux
00:00:45.845 + sudo /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh status
00:00:48.377 Hugepages
00:00:48.377 node hugesize free / total
00:00:48.377 node0 1048576kB 0 / 0
00:00:48.377 node0 2048kB 0 / 0
00:00:48.377 node1 1048576kB 0 / 0
00:00:48.377 node1 2048kB 0 / 0
00:00:48.377
00:00:48.377 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:48.377 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:48.377 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:48.377 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:48.377 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:48.377 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:48.377 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:48.377 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:48.377 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:48.377 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:00:48.377 NVMe 0000:5f:00.0 1b96 2600 0 nvme nvme1 nvme1n1 nvme1n2
00:00:48.377 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:48.377 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:48.377 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:48.377 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:48.377 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:48.377 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:48.377 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:48.377 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:48.377 + rm -f /tmp/spdk-ld-path
00:00:48.377 + source autorun-spdk.conf
00:00:48.377 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:48.377 ++ SPDK_TEST_NVMF=1
00:00:48.377 ++ SPDK_TEST_NVME_CLI=1
00:00:48.377 ++ SPDK_TEST_NVMF_TRANSPORT=rdma
00:00:48.377 ++ SPDK_TEST_NVMF_NICS=e810
00:00:48.377 ++ SPDK_RUN_UBSAN=1
00:00:48.377 ++ NET_TYPE=phy
00:00:48.377 ++ RUN_NIGHTLY=1
00:00:48.377 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:48.377 + [[ -n '' ]]
00:00:48.377 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
00:00:48.377 + for M in /var/spdk/build-*-manifest.txt
00:00:48.377 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:48.377 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-cvl-phy-autotest/output/
00:00:48.377 + for M in /var/spdk/build-*-manifest.txt
00:00:48.377 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:48.377 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-cvl-phy-autotest/output/
00:00:48.377 ++ uname
00:00:48.377 + [[ Linux == \L\i\n\u\x ]]
00:00:48.377 + sudo dmesg -T
00:00:48.635 + sudo dmesg --clear
00:00:48.635 + dmesg_pid=1038961
00:00:48.635 + [[ Fedora Linux == FreeBSD ]]
00:00:48.635 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:48.635 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:48.635 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:48.635 + [[ -x /usr/src/fio-static/fio ]]
00:00:48.635 + export FIO_BIN=/usr/src/fio-static/fio
00:00:48.635 + sudo dmesg -Tw
00:00:48.635 + FIO_BIN=/usr/src/fio-static/fio
00:00:48.635 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\c\v\l\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:48.635 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:48.635 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:48.635 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:48.635 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:48.635 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:48.635 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:48.635 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:48.635 + spdk/autorun.sh /var/jenkins/workspace/nvmf-cvl-phy-autotest/autorun-spdk.conf
00:00:48.635 Test configuration:
00:00:48.635 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:48.635 SPDK_TEST_NVMF=1
00:00:48.635 SPDK_TEST_NVME_CLI=1
00:00:48.635 SPDK_TEST_NVMF_TRANSPORT=rdma
00:00:48.635 SPDK_TEST_NVMF_NICS=e810
00:00:48.635 SPDK_RUN_UBSAN=1
00:00:48.635 NET_TYPE=phy
00:00:48.635 RUN_NIGHTLY=1
08:39:11 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh
00:00:48.635 08:39:11 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:48.635 08:39:11 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:48.635 08:39:11 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:48.635 08:39:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:48.636 08:39:11 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:48.636 08:39:11 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:48.636 08:39:11 -- paths/export.sh@5 -- $ export PATH
00:00:48.636 08:39:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:48.636 08:39:11 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output
00:00:48.636 08:39:11 -- common/autobuild_common.sh@437 -- $ date +%s
00:00:48.636 08:39:11 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1717915151.XXXXXX
00:00:48.636 08:39:11 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1717915151.PK1JIJ
00:00:48.636 08:39:11 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
00:00:48.636 08:39:11 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
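Everything spdk/autorun.sh does from here on is driven by the autorun-spdk.conf echoed in the "Test configuration" block above. A minimal sketch of reproducing that configuration by hand outside Jenkins, assuming an SPDK checkout in ./spdk (the conf values are copied from the log; the checkout path is illustrative):

```bash
#!/usr/bin/env bash
set -e

# Recreate the test configuration printed above.
cat > autorun-spdk.conf <<'EOF'
SPDK_RUN_FUNCTIONAL_TEST=1
SPDK_TEST_NVMF=1
SPDK_TEST_NVME_CLI=1
SPDK_TEST_NVMF_TRANSPORT=rdma
SPDK_TEST_NVMF_NICS=e810
SPDK_RUN_UBSAN=1
NET_TYPE=phy
RUN_NIGHTLY=1
EOF

# autorun.sh sources the conf and derives its build and test plan from it.
./spdk/autorun.sh "$PWD/autorun-spdk.conf"
```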
00:00:48.636 08:39:11 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/'
00:00:48.636 08:39:11 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:48.636 08:39:11 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:48.636 08:39:11 -- common/autobuild_common.sh@453 -- $ get_config_params
00:00:48.636 08:39:11 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:00:48.636 08:39:11 -- common/autotest_common.sh@10 -- $ set +x
00:00:48.636 08:39:11 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:00:48.636 08:39:11 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:00:48.636 08:39:11 -- pm/common@17 -- $ local monitor
00:00:48.636 08:39:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:48.636 08:39:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:48.636 08:39:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:48.636 08:39:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:48.636 08:39:11 -- pm/common@21 -- $ date +%s
00:00:48.636 08:39:11 -- pm/common@25 -- $ sleep 1
00:00:48.636 08:39:11 -- pm/common@21 -- $ date +%s
00:00:48.636 08:39:11 -- pm/common@21 -- $ date +%s
00:00:48.636 08:39:11 -- pm/common@21 -- $ date +%s
00:00:48.636 08:39:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1717915151
00:00:48.636 08:39:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1717915151
00:00:48.636 08:39:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1717915151
00:00:48.636 08:39:11 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1717915151
00:00:48.636 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1717915151_collect-cpu-load.pm.log
00:00:48.636 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1717915151_collect-vmstat.pm.log
00:00:48.636 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1717915151_collect-cpu-temp.pm.log
00:00:48.636 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1717915151_collect-bmc-pm.bmc.pm.log
00:00:49.571 08:39:12 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:00:49.571 08:39:12 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:49.571 08:39:12 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:49.571 08:39:12 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
00:00:49.571 08:39:12 -- spdk/autobuild.sh@16 -- $ date -u
00:00:49.571 Sun Jun 9 06:39:12 AM UTC 2024
00:00:49.571 08:39:12 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:49.571 v24.09-pre-53-ge55c9a812
00:00:49.571 08:39:12 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:49.571 08:39:12 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:49.571 08:39:12 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:49.571 08:39:12 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']'
00:00:49.571 08:39:12 -- common/autotest_common.sh@1106 -- $ xtrace_disable
00:00:49.571 08:39:12 -- common/autotest_common.sh@10 -- $ set +x
00:00:49.571 ************************************
00:00:49.571 START TEST ubsan
00:00:49.571 ************************************
00:00:49.571 08:39:12 ubsan -- common/autotest_common.sh@1124 -- $ echo 'using ubsan'
00:00:49.571 using ubsan
00:00:49.571
00:00:49.571 real 0m0.000s
00:00:49.571 user 0m0.000s
00:00:49.571 sys 0m0.000s
00:00:49.571 08:39:12 ubsan -- common/autotest_common.sh@1125 -- $ xtrace_disable
00:00:49.571 08:39:12 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:49.571 ************************************
00:00:49.571 END TEST ubsan
00:00:49.571 ************************************
00:00:49.828 08:39:12 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:49.828 08:39:12 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:49.828 08:39:12 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:49.828 08:39:12 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:49.828 08:39:12 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:49.828 08:39:12 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:49.828 08:39:12 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:49.828 08:39:12 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:49.828 08:39:12 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared
00:00:49.828 Using default SPDK env in /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk
00:00:49.828 Using default DPDK in /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build
00:00:50.086 Using 'verbs' RDMA provider
00:01:03.215 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:15.416 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:15.416 Creating mk/config.mk...done.
00:01:15.416 Creating mk/cc.flags.mk...done.
00:01:15.416 Type 'make' to build.
00:01:15.416 08:39:36 -- spdk/autobuild.sh@69 -- $ run_test make make -j96
00:01:15.416 08:39:36 -- common/autotest_common.sh@1100 -- $ '[' 3 -le 1 ']'
00:01:15.416 08:39:36 -- common/autotest_common.sh@1106 -- $ xtrace_disable
00:01:15.416 08:39:36 -- common/autotest_common.sh@10 -- $ set +x
00:01:15.416 ************************************
00:01:15.416 START TEST make
00:01:15.416 ************************************
00:01:15.416 08:39:36 make -- common/autotest_common.sh@1124 -- $ make -j96
00:01:15.416 make[1]: Nothing to be done for 'all'.
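The autobuild step above boils down to two commands: configure SPDK with the parameters derived from the test flags, then build. A minimal sketch of the same sequence run by hand, with the flags copied verbatim from the configure line in the log (the job used make -j96; using your own core count is an assumption):

```bash
#!/usr/bin/env bash
set -e
cd spdk  # path to the SPDK checkout; adjust as needed

# Exact configure flags from the log, including the UBSAN instrumentation
# requested by SPDK_RUN_UBSAN=1.
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
            --with-fio=/usr/src/fio --with-iscsi-initiator \
            --disable-unit-tests --enable-ubsan --enable-coverage \
            --with-ublk --with-shared

make -j"$(nproc)"
```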
00:01:22.038 The Meson build system
00:01:22.038 Version: 1.3.1
00:01:22.038 Source dir: /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk
00:01:22.038 Build dir: /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build-tmp
00:01:22.038 Build type: native build
00:01:22.038 Program cat found: YES (/usr/bin/cat)
00:01:22.038 Project name: DPDK
00:01:22.038 Project version: 24.03.0
00:01:22.038 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:22.038 C linker for the host machine: cc ld.bfd 2.39-16
00:01:22.038 Host machine cpu family: x86_64
00:01:22.038 Host machine cpu: x86_64
00:01:22.038 Message: ## Building in Developer Mode ##
00:01:22.038 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:22.039 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:22.039 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:22.039 Program python3 found: YES (/usr/bin/python3)
00:01:22.039 Program cat found: YES (/usr/bin/cat)
00:01:22.039 Compiler for C supports arguments -march=native: YES
00:01:22.039 Checking for size of "void *" : 8
00:01:22.039 Checking for size of "void *" : 8 (cached)
00:01:22.039 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:22.039 Library m found: YES
00:01:22.039 Library numa found: YES
00:01:22.039 Has header "numaif.h" : YES
00:01:22.039 Library fdt found: NO
00:01:22.039 Library execinfo found: NO
00:01:22.039 Has header "execinfo.h" : YES
00:01:22.039 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:22.039 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:22.039 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:22.039 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:22.039 Run-time dependency openssl found: YES 3.0.9
00:01:22.039 Run-time dependency libpcap found: YES 1.10.4
00:01:22.039 Has header "pcap.h" with dependency libpcap: YES
00:01:22.039 Compiler for C supports arguments -Wcast-qual: YES
00:01:22.039 Compiler for C supports arguments -Wdeprecated: YES
00:01:22.039 Compiler for C supports arguments -Wformat: YES
00:01:22.039 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:22.039 Compiler for C supports arguments -Wformat-security: NO
00:01:22.039 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:22.039 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:22.039 Compiler for C supports arguments -Wnested-externs: YES
00:01:22.039 Compiler for C supports arguments -Wold-style-definition: YES
00:01:22.039 Compiler for C supports arguments -Wpointer-arith: YES
00:01:22.039 Compiler for C supports arguments -Wsign-compare: YES
00:01:22.039 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:22.039 Compiler for C supports arguments -Wundef: YES
00:01:22.039 Compiler for C supports arguments -Wwrite-strings: YES
00:01:22.039 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:22.039 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:22.039 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:22.039 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:22.039 Program objdump found: YES (/usr/bin/objdump)
00:01:22.039 Compiler for C supports arguments -mavx512f: YES
00:01:22.039 Checking if "AVX512 checking" compiles: YES
00:01:22.039 Fetching value of define "__SSE4_2__" : 1
00:01:22.039 Fetching value of define "__AES__" : 1
00:01:22.039 Fetching value of define "__AVX__" : 1
00:01:22.039 Fetching value of define "__AVX2__" : 1
00:01:22.039 Fetching value of define "__AVX512BW__" : 1
00:01:22.039 Fetching value of define "__AVX512CD__" : 1
00:01:22.039 Fetching value of define "__AVX512DQ__" : 1
00:01:22.039 Fetching value of define "__AVX512F__" : 1
00:01:22.039 Fetching value of define "__AVX512VL__" : 1
00:01:22.039 Fetching value of define "__PCLMUL__" : 1
00:01:22.039 Fetching value of define "__RDRND__" : 1
00:01:22.039 Fetching value of define "__RDSEED__" : 1
00:01:22.039 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:22.039 Fetching value of define "__znver1__" : (undefined)
00:01:22.039 Fetching value of define "__znver2__" : (undefined)
00:01:22.039 Fetching value of define "__znver3__" : (undefined)
00:01:22.039 Fetching value of define "__znver4__" : (undefined)
00:01:22.039 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:22.039 Message: lib/log: Defining dependency "log"
00:01:22.039 Message: lib/kvargs: Defining dependency "kvargs"
00:01:22.039 Message: lib/telemetry: Defining dependency "telemetry"
00:01:22.039 Checking for function "getentropy" : NO
00:01:22.039 Message: lib/eal: Defining dependency "eal"
00:01:22.039 Message: lib/ring: Defining dependency "ring"
00:01:22.039 Message: lib/rcu: Defining dependency "rcu"
00:01:22.039 Message: lib/mempool: Defining dependency "mempool"
00:01:22.039 Message: lib/mbuf: Defining dependency "mbuf"
00:01:22.039 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:22.039 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:22.039 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:22.039 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:22.039 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:22.039 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:22.039 Compiler for C supports arguments -mpclmul: YES
00:01:22.039 Compiler for C supports arguments -maes: YES
00:01:22.039 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:22.039 Compiler for C supports arguments -mavx512bw: YES
00:01:22.039 Compiler for C supports arguments -mavx512dq: YES
00:01:22.039 Compiler for C supports arguments -mavx512vl: YES
00:01:22.039 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:22.039 Compiler for C supports arguments -mavx2: YES
00:01:22.039 Compiler for C supports arguments -mavx: YES
00:01:22.039 Message: lib/net: Defining dependency "net"
00:01:22.039 Message: lib/meter: Defining dependency "meter"
00:01:22.039 Message: lib/ethdev: Defining dependency "ethdev"
00:01:22.039 Message: lib/pci: Defining dependency "pci"
00:01:22.039 Message: lib/cmdline: Defining dependency "cmdline"
00:01:22.039 Message: lib/hash: Defining dependency "hash"
00:01:22.039 Message: lib/timer: Defining dependency "timer"
00:01:22.039 Message: lib/compressdev: Defining dependency "compressdev"
00:01:22.039 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:22.039 Message: lib/dmadev: Defining dependency "dmadev"
00:01:22.039 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:22.039 Message: lib/power: Defining dependency "power"
00:01:22.039 Message: lib/reorder: Defining dependency "reorder"
00:01:22.039 Message: lib/security: Defining dependency "security"
00:01:22.039 Has header "linux/userfaultfd.h" : YES
00:01:22.039 Has header "linux/vduse.h" : YES
00:01:22.039 Message: lib/vhost: Defining dependency "vhost"
00:01:22.039 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:22.039 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:22.039 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:22.039 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:22.039 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:22.039 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:22.039 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:22.039 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:22.039 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:22.039 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:22.039 Program doxygen found: YES (/usr/bin/doxygen)
00:01:22.039 Configuring doxy-api-html.conf using configuration
00:01:22.039 Configuring doxy-api-man.conf using configuration
00:01:22.039 Program mandb found: YES (/usr/bin/mandb)
00:01:22.039 Program sphinx-build found: NO
00:01:22.039 Configuring rte_build_config.h using configuration
00:01:22.039 Message:
00:01:22.039 =================
00:01:22.039 Applications Enabled
00:01:22.039 =================
00:01:22.039
00:01:22.039 apps:
00:01:22.039
00:01:22.039
00:01:22.039 Message:
00:01:22.039 =================
00:01:22.039 Libraries Enabled
00:01:22.039 =================
00:01:22.039
00:01:22.039 libs:
00:01:22.039 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:22.039 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:22.039 cryptodev, dmadev, power, reorder, security, vhost,
00:01:22.039
00:01:22.039 Message:
00:01:22.039 ===============
00:01:22.039 Drivers Enabled
00:01:22.039 ===============
00:01:22.039
00:01:22.039 common:
00:01:22.039
00:01:22.039 bus:
00:01:22.039 pci, vdev,
00:01:22.039 mempool:
00:01:22.039 ring,
00:01:22.039 dma:
00:01:22.039
00:01:22.039 net:
00:01:22.039
00:01:22.039 crypto:
00:01:22.039
00:01:22.039 compress:
00:01:22.039
00:01:22.039 vdpa:
00:01:22.039
00:01:22.039
00:01:22.039 Message:
00:01:22.039 =================
00:01:22.039 Content Skipped
00:01:22.039 =================
00:01:22.039
00:01:22.039 apps:
00:01:22.039 dumpcap: explicitly disabled via build config
00:01:22.039 graph: explicitly disabled via build config
00:01:22.039 pdump: explicitly disabled via build config
00:01:22.039 proc-info: explicitly disabled via build config
00:01:22.039 test-acl: explicitly disabled via build config
00:01:22.039 test-bbdev: explicitly disabled via build config
00:01:22.039 test-cmdline: explicitly disabled via build config
00:01:22.039 test-compress-perf: explicitly disabled via build config
00:01:22.039 test-crypto-perf: explicitly disabled via build config
00:01:22.039 test-dma-perf: explicitly disabled via build config
00:01:22.039 test-eventdev: explicitly disabled via build config
00:01:22.039 test-fib: explicitly disabled via build config
00:01:22.039 test-flow-perf: explicitly disabled via build config
00:01:22.039 test-gpudev: explicitly disabled via build config
00:01:22.039 test-mldev: explicitly disabled via build config
00:01:22.039 test-pipeline: explicitly disabled via build config
00:01:22.039 test-pmd: explicitly disabled via build config
00:01:22.039 test-regex: explicitly disabled via build config
00:01:22.039 test-sad: explicitly disabled via build config
00:01:22.039 test-security-perf: explicitly disabled via build config
00:01:22.039
00:01:22.039 libs:
00:01:22.039 argparse: explicitly disabled via build config
00:01:22.039 metrics: explicitly disabled via build config
00:01:22.039 acl: explicitly disabled via build config
00:01:22.039 bbdev: explicitly disabled via build config
00:01:22.039 bitratestats: explicitly disabled via build config
00:01:22.039 bpf: explicitly disabled via build config
00:01:22.040 cfgfile: explicitly disabled via build config
00:01:22.040 distributor: explicitly disabled via build config
00:01:22.040 efd: explicitly disabled via build config
00:01:22.040 eventdev: explicitly disabled via build config
00:01:22.040 dispatcher: explicitly disabled via build config
00:01:22.040 gpudev: explicitly disabled via build config
00:01:22.040 gro: explicitly disabled via build config
00:01:22.040 gso: explicitly disabled via build config
00:01:22.040 ip_frag: explicitly disabled via build config
00:01:22.040 jobstats: explicitly disabled via build config
00:01:22.040 latencystats: explicitly disabled via build config
00:01:22.040 lpm: explicitly disabled via build config
00:01:22.040 member: explicitly disabled via build config
00:01:22.040 pcapng: explicitly disabled via build config
00:01:22.040 rawdev: explicitly disabled via build config
00:01:22.040 regexdev: explicitly disabled via build config
00:01:22.040 mldev: explicitly disabled via build config
00:01:22.040 rib: explicitly disabled via build config
00:01:22.040 sched: explicitly disabled via build config
00:01:22.040 stack: explicitly disabled via build config
00:01:22.040 ipsec: explicitly disabled via build config
00:01:22.040 pdcp: explicitly disabled via build config
00:01:22.040 fib: explicitly disabled via build config
00:01:22.040 port: explicitly disabled via build config
00:01:22.040 pdump: explicitly disabled via build config
00:01:22.040 table: explicitly disabled via build config
00:01:22.040 pipeline: explicitly disabled via build config
00:01:22.040 graph: explicitly disabled via build config
00:01:22.040 node: explicitly disabled via build config
00:01:22.040
00:01:22.040 drivers:
00:01:22.040 common/cpt: not in enabled drivers build config
00:01:22.040 common/dpaax: not in enabled drivers build config
00:01:22.040 common/iavf: not in enabled drivers build config
00:01:22.040 common/idpf: not in enabled drivers build config
00:01:22.040 common/ionic: not in enabled drivers build config
00:01:22.040 common/mvep: not in enabled drivers build config
00:01:22.040 common/octeontx: not in enabled drivers build config
00:01:22.040 bus/auxiliary: not in enabled drivers build config
00:01:22.040 bus/cdx: not in enabled drivers build config
00:01:22.040 bus/dpaa: not in enabled drivers build config
00:01:22.040 bus/fslmc: not in enabled drivers build config
00:01:22.040 bus/ifpga: not in enabled drivers build config
00:01:22.040 bus/platform: not in enabled drivers build config
00:01:22.040 bus/uacce: not in enabled drivers build config
00:01:22.040 bus/vmbus: not in enabled drivers build config
00:01:22.040 common/cnxk: not in enabled drivers build config
00:01:22.040 common/mlx5: not in enabled drivers build config
00:01:22.040 common/nfp: not in enabled drivers build config
00:01:22.040 common/nitrox: not in enabled drivers build config
00:01:22.040 common/qat: not in enabled drivers build config
00:01:22.040 common/sfc_efx: not in enabled drivers build config
00:01:22.040 mempool/bucket: not in enabled drivers build config
00:01:22.040 mempool/cnxk: not in enabled drivers build config
00:01:22.040 mempool/dpaa: not in enabled drivers build config
00:01:22.040 mempool/dpaa2: not in enabled drivers build config
00:01:22.040 mempool/octeontx: not in enabled drivers build config
00:01:22.040 mempool/stack: not in enabled drivers build config
00:01:22.040 dma/cnxk: not in enabled drivers build config
00:01:22.040 dma/dpaa: not in enabled drivers build config
00:01:22.040 dma/dpaa2: not in enabled drivers build config
00:01:22.040 dma/hisilicon: not in enabled drivers build config
00:01:22.040 dma/idxd: not in enabled drivers build config
00:01:22.040 dma/ioat: not in enabled drivers build config
00:01:22.040 dma/skeleton: not in enabled drivers build config
00:01:22.040 net/af_packet: not in enabled drivers build config
00:01:22.040 net/af_xdp: not in enabled drivers build config
00:01:22.040 net/ark: not in enabled drivers build config
00:01:22.040 net/atlantic: not in enabled drivers build config
00:01:22.040 net/avp: not in enabled drivers build config
00:01:22.040 net/axgbe: not in enabled drivers build config
00:01:22.040 net/bnx2x: not in enabled drivers build config
00:01:22.040 net/bnxt: not in enabled drivers build config
00:01:22.040 net/bonding: not in enabled drivers build config
00:01:22.040 net/cnxk: not in enabled drivers build config
00:01:22.040 net/cpfl: not in enabled drivers build config
00:01:22.040 net/cxgbe: not in enabled drivers build config
00:01:22.040 net/dpaa: not in enabled drivers build config
00:01:22.040 net/dpaa2: not in enabled drivers build config
00:01:22.040 net/e1000: not in enabled drivers build config
00:01:22.040 net/ena: not in enabled drivers build config
00:01:22.040 net/enetc: not in enabled drivers build config
00:01:22.040 net/enetfec: not in enabled drivers build config
00:01:22.040 net/enic: not in enabled drivers build config
00:01:22.040 net/failsafe: not in enabled drivers build config
00:01:22.040 net/fm10k: not in enabled drivers build config
00:01:22.040 net/gve: not in enabled drivers build config
00:01:22.040 net/hinic: not in enabled drivers build config
00:01:22.040 net/hns3: not in enabled drivers build config
00:01:22.040 net/i40e: not in enabled drivers build config
00:01:22.040 net/iavf: not in enabled drivers build config
00:01:22.040 net/ice: not in enabled drivers build config
00:01:22.040 net/idpf: not in enabled drivers build config
00:01:22.040 net/igc: not in enabled drivers build config
00:01:22.040 net/ionic: not in enabled drivers build config
00:01:22.040 net/ipn3ke: not in enabled drivers build config
00:01:22.040 net/ixgbe: not in enabled drivers build config
00:01:22.040 net/mana: not in enabled drivers build config
00:01:22.040 net/memif: not in enabled drivers build config
00:01:22.040 net/mlx4: not in enabled drivers build config
00:01:22.040 net/mlx5: not in enabled drivers build config
00:01:22.040 net/mvneta: not in enabled drivers build config
00:01:22.040 net/mvpp2: not in enabled drivers build config
00:01:22.040 net/netvsc: not in enabled drivers build config
00:01:22.040 net/nfb: not in enabled drivers build config
00:01:22.040 net/nfp: not in enabled drivers build config
00:01:22.040 net/ngbe: not in enabled drivers build config
00:01:22.040 net/null: not in enabled drivers build config
00:01:22.040 net/octeontx: not in enabled drivers build config
00:01:22.040 net/octeon_ep: not in enabled drivers build config
00:01:22.040 net/pcap: not in enabled drivers build config
00:01:22.040 net/pfe: not in enabled drivers build config
00:01:22.040 net/qede: not in enabled drivers build config
00:01:22.040 net/ring: not in enabled drivers build config
00:01:22.040 net/sfc: not in enabled drivers build config
00:01:22.040 net/softnic: not in enabled drivers build config
00:01:22.040 net/tap: not in enabled drivers build config
00:01:22.040 net/thunderx: not in enabled drivers build config
00:01:22.040 net/txgbe: not in enabled drivers build config
00:01:22.040 net/vdev_netvsc: not in enabled drivers build config
00:01:22.040 net/vhost: not in enabled drivers build config
00:01:22.040 net/virtio: not in enabled drivers build config
00:01:22.040 net/vmxnet3: not in enabled drivers build config
00:01:22.040 raw/*: missing internal dependency, "rawdev"
00:01:22.040 crypto/armv8: not in enabled drivers build config
00:01:22.040 crypto/bcmfs: not in enabled drivers build config
00:01:22.040 crypto/caam_jr: not in enabled drivers build config
00:01:22.040 crypto/ccp: not in enabled drivers build config
00:01:22.040 crypto/cnxk: not in enabled drivers build config
00:01:22.040 crypto/dpaa_sec: not in enabled drivers build config
00:01:22.040 crypto/dpaa2_sec: not in enabled drivers build config
00:01:22.040 crypto/ipsec_mb: not in enabled drivers build config
00:01:22.040 crypto/mlx5: not in enabled drivers build config
00:01:22.040 crypto/mvsam: not in enabled drivers build config
00:01:22.040 crypto/nitrox: not in enabled drivers build config
00:01:22.040 crypto/null: not in enabled drivers build config
00:01:22.040 crypto/octeontx: not in enabled drivers build config
00:01:22.040 crypto/openssl: not in enabled drivers build config
00:01:22.040 crypto/scheduler: not in enabled drivers build config
00:01:22.040 crypto/uadk: not in enabled drivers build config
00:01:22.040 crypto/virtio: not in enabled drivers build config
00:01:22.040 compress/isal: not in enabled drivers build config
00:01:22.040 compress/mlx5: not in enabled drivers build config
00:01:22.040 compress/nitrox: not in enabled drivers build config
00:01:22.040 compress/octeontx: not in enabled drivers build config
00:01:22.040 compress/zlib: not in enabled drivers build config
00:01:22.040 regex/*: missing internal dependency, "regexdev"
00:01:22.040 ml/*: missing internal dependency, "mldev"
00:01:22.040 vdpa/ifc: not in enabled drivers build config
00:01:22.040 vdpa/mlx5: not in enabled drivers build config
00:01:22.040 vdpa/nfp: not in enabled drivers build config
00:01:22.040 vdpa/sfc: not in enabled drivers build config
00:01:22.040 event/*: missing internal dependency, "eventdev"
00:01:22.040 baseband/*: missing internal dependency, "bbdev"
00:01:22.040 gpu/*: missing internal dependency, "gpudev"
00:01:22.040
00:01:22.040
00:01:22.040 Build targets in project: 85
00:01:22.040
00:01:22.040 DPDK 24.03.0
00:01:22.040
00:01:22.040 User defined options
00:01:22.040 buildtype : debug
00:01:22.040 default_library : shared
00:01:22.040 libdir : lib
00:01:22.040 prefix : /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build
00:01:22.040 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:22.040 c_link_args :
00:01:22.040 cpu_instruction_set: native
00:01:22.040 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib
00:01:22.040 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,argparse,pipeline,bbdev,table,metrics,member,jobstats,efd,rib
00:01:22.040 enable_docs : false
00:01:22.040 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:22.040 enable_kmods : false
00:01:22.040 tests : false
00:01:22.040
00:01:22.040 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:22.615 ninja: Entering directory `/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build-tmp'
00:01:22.615 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:22.615 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:22.615 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:22.615 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:22.615 [5/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:22.615 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:22.615 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:22.615 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:22.615 [9/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:22.615 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:22.615 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:22.615 [12/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:22.615 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:22.615 [14/268] Linking static target lib/librte_kvargs.a
00:01:22.615 [15/268] Linking static target lib/librte_log.a
00:01:22.615 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:22.615 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:22.615 [18/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:22.615 [19/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:22.615 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:22.615 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:22.615 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:22.615 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:22.615 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:22.615 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:23.137 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:23.137 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:23.137 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:23.137 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:23.137 [30/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:23.137 [31/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:23.137 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:23.137 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:23.137 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:23.137 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:23.137 [37/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:23.137 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:23.137 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:23.137 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:23.137 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:23.137 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:23.137 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:23.137 [44/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:23.137 [45/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:23.137 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:23.137 [47/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:23.137 [48/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:23.137 [49/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:23.137 [50/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:23.137 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:23.137 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:23.137 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:23.137 [54/268] Linking static target lib/librte_meter.a
00:01:23.137 [55/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:23.137 [56/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:23.137 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:23.137 [58/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:23.137 [59/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:23.137 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:23.137 [61/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:23.137 [62/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:23.137 [63/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:23.137 [64/268] Linking static target lib/librte_telemetry.a
00:01:23.137 [65/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:23.137 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:23.137 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:23.137 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:23.137 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:23.137 [70/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:23.137 [71/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:23.137 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:23.137 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:23.137 [74/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:23.137 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:23.137 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:23.137 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:23.137 [78/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:23.137 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:23.137 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:23.137 [81/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:23.137 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:23.137 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:23.137 [84/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:23.137 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:23.137 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:23.137 [87/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:23.137 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:23.137 [89/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:23.397 [90/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:23.397 [91/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:23.397 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:23.397 [93/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:23.397 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:23.397 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:23.397 [96/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:23.397 [97/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:23.397 [98/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:23.397 [99/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:23.397 [100/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:23.397 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:23.397 [102/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:23.397 [103/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:23.397 [104/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:23.397 [105/268] Linking static target lib/librte_ring.a
00:01:23.397 [106/268] Linking static target lib/librte_rcu.a
00:01:23.397 [107/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:23.397 [108/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:23.397 [109/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:23.397 [110/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:23.397 [111/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:23.397 [112/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:23.397 [113/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:23.397 [114/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:23.397 [115/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:23.397 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:23.397 [117/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:23.397 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:23.397 [119/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:23.397 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:01:23.397 [121/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:23.397 [122/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:23.397 [123/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:23.397 [124/268] Linking static target lib/librte_net.a
00:01:23.397 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:23.397 [126/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:23.397 [127/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:23.397 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:23.397 [129/268] Linking static target lib/librte_mempool.a
00:01:23.397 [130/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:23.397 [131/268] Linking static target lib/librte_cmdline.a
00:01:23.397 [132/268] Linking target lib/librte_log.so.24.1
00:01:23.397 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:23.397 [134/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:23.397 [135/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:23.397 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:23.397 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:23.397 [138/268] Linking static target lib/librte_eal.a
00:01:23.656 [139/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:23.656 [140/268] Linking static target lib/librte_timer.a
00:01:23.656 [141/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:01:23.656 [142/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:23.656 [143/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:23.656 [144/268] Linking static target lib/librte_mbuf.a
00:01:23.656 [145/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:23.656 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:23.656 [147/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:23.656 [148/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:23.656 [149/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:23.656 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:23.656 [151/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:23.656 [152/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:01:23.656 [153/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:23.656 [154/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:01:23.656 [155/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:01:23.656 [156/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:23.656 [157/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:01:23.656 [158/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:01:23.656 [159/268] Linking target lib/librte_kvargs.so.24.1
00:01:23.656 [160/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:23.656 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:23.656 [162/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:01:23.656 [163/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:01:23.656 [164/268] Linking target lib/librte_telemetry.so.24.1
00:01:23.656 [165/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:23.656 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:01:23.656 [167/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:23.656 [168/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:23.656 [169/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:23.656 [170/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:23.656 [171/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:23.656 [172/268] Linking static target lib/librte_compressdev.a
00:01:23.656 [173/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:23.656 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:01:23.656 [175/268] Linking static target lib/librte_dmadev.a
00:01:23.656 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:23.656 [177/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:01:23.656 [178/268] Linking static target lib/librte_reorder.a
00:01:23.656 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:01:23.656 [180/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:01:23.915 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:01:23.915 [182/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:01:23.915 [183/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:01:23.915 [184/268] Linking static target lib/librte_power.a
00:01:23.915 [185/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:01:23.915 [186/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:01:23.915 [187/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:01:23.915 [188/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:01:23.915 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:01:23.915 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:01:23.915 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:01:23.915 [192/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:23.915 [193/268] Linking static target lib/librte_security.a
00:01:23.915 [194/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:01:23.915 [195/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:01:23.915 [196/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:23.915 [197/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:23.915 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:23.915 [199/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:23.915 [200/268] Linking static target lib/librte_cryptodev.a 00:01:23.915 [201/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:23.915 [202/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:23.915 [203/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:23.915 [204/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:23.915 [205/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:23.915 [206/268] Linking static target drivers/librte_bus_vdev.a 00:01:23.915 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:23.915 [208/268] Linking static target drivers/librte_bus_pci.a 00:01:23.915 [209/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:23.915 [210/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:23.915 [211/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:23.915 [212/268] Linking static target lib/librte_hash.a 00:01:23.915 [213/268] Linking static target drivers/librte_mempool_ring.a 00:01:24.174 [214/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.174 [215/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.174 [216/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.433 [217/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.433 [218/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.433 [219/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.433 [220/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:24.433 [221/268] Linking static target lib/librte_ethdev.a 00:01:24.433 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.433 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.433 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:24.692 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.692 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.950 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.887 [228/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.887 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:25.887 [230/268] Linking static target lib/librte_vhost.a 00:01:27.786 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.975 [232/268] Generating lib/ethdev.sym_chk 
with a custom command (wrapped by meson to capture output) 00:01:32.912 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.912 [234/268] Linking target lib/librte_eal.so.24.1 00:01:33.172 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:33.172 [236/268] Linking target lib/librte_timer.so.24.1 00:01:33.172 [237/268] Linking target lib/librte_pci.so.24.1 00:01:33.172 [238/268] Linking target lib/librte_dmadev.so.24.1 00:01:33.172 [239/268] Linking target lib/librte_ring.so.24.1 00:01:33.172 [240/268] Linking target lib/librte_meter.so.24.1 00:01:33.172 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:33.172 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:33.172 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:33.172 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:33.172 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:33.172 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:33.172 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:33.172 [248/268] Linking target lib/librte_rcu.so.24.1 00:01:33.172 [249/268] Linking target lib/librte_mempool.so.24.1 00:01:33.430 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:33.431 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:33.431 [252/268] Linking target lib/librte_mbuf.so.24.1 00:01:33.431 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:33.689 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:33.689 [255/268] Linking target lib/librte_reorder.so.24.1 00:01:33.689 [256/268] Linking target lib/librte_compressdev.so.24.1 00:01:33.689 [257/268] Linking target lib/librte_net.so.24.1 00:01:33.689 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:01:33.689 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:33.689 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:33.689 [261/268] Linking target lib/librte_cmdline.so.24.1 00:01:33.689 [262/268] Linking target lib/librte_hash.so.24.1 00:01:33.689 [263/268] Linking target lib/librte_security.so.24.1 00:01:33.948 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:33.948 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:33.948 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:33.948 [267/268] Linking target lib/librte_vhost.so.24.1 00:01:33.948 [268/268] Linking target lib/librte_power.so.24.1 00:01:33.948 INFO: autodetecting backend as ninja 00:01:33.948 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build-tmp -j 96 00:01:34.884 CC lib/log/log.o 00:01:34.884 CC lib/log/log_flags.o 00:01:34.884 CC lib/log/log_deprecated.o 00:01:34.884 CC lib/ut_mock/mock.o 00:01:34.884 CC lib/ut/ut.o 00:01:35.142 LIB libspdk_log.a 00:01:35.142 LIB libspdk_ut_mock.a 00:01:35.142 LIB libspdk_ut.a 00:01:35.142 SO libspdk_log.so.7.0 00:01:35.142 SO libspdk_ut.so.2.0 00:01:35.142 SO libspdk_ut_mock.so.6.0 00:01:35.142 SYMLINK libspdk_ut_mock.so 00:01:35.142 SYMLINK 
libspdk_ut.so 00:01:35.142 SYMLINK libspdk_log.so 00:01:35.399 CC lib/util/base64.o 00:01:35.399 CC lib/util/bit_array.o 00:01:35.399 CC lib/util/cpuset.o 00:01:35.399 CC lib/util/crc16.o 00:01:35.399 CC lib/util/crc32.o 00:01:35.399 CC lib/util/crc32c.o 00:01:35.399 CC lib/util/crc32_ieee.o 00:01:35.399 CC lib/util/crc64.o 00:01:35.399 CC lib/util/dif.o 00:01:35.399 CC lib/util/fd.o 00:01:35.399 CC lib/util/file.o 00:01:35.399 CC lib/util/hexlify.o 00:01:35.399 CC lib/util/iov.o 00:01:35.399 CC lib/util/math.o 00:01:35.399 CC lib/util/pipe.o 00:01:35.399 CC lib/util/string.o 00:01:35.399 CC lib/util/strerror_tls.o 00:01:35.399 CC lib/util/uuid.o 00:01:35.399 CC lib/util/xor.o 00:01:35.399 CC lib/util/fd_group.o 00:01:35.399 CC lib/util/zipf.o 00:01:35.399 CC lib/ioat/ioat.o 00:01:35.399 CXX lib/trace_parser/trace.o 00:01:35.399 CC lib/dma/dma.o 00:01:35.657 CC lib/vfio_user/host/vfio_user_pci.o 00:01:35.657 CC lib/vfio_user/host/vfio_user.o 00:01:35.657 LIB libspdk_dma.a 00:01:35.657 SO libspdk_dma.so.4.0 00:01:35.657 LIB libspdk_ioat.a 00:01:35.657 SO libspdk_ioat.so.7.0 00:01:35.657 SYMLINK libspdk_dma.so 00:01:35.915 SYMLINK libspdk_ioat.so 00:01:35.915 LIB libspdk_vfio_user.a 00:01:35.915 SO libspdk_vfio_user.so.5.0 00:01:35.915 LIB libspdk_util.a 00:01:35.915 SYMLINK libspdk_vfio_user.so 00:01:35.915 SO libspdk_util.so.9.0 00:01:36.172 SYMLINK libspdk_util.so 00:01:36.172 LIB libspdk_trace_parser.a 00:01:36.172 SO libspdk_trace_parser.so.5.0 00:01:36.172 SYMLINK libspdk_trace_parser.so 00:01:36.430 CC lib/env_dpdk/env.o 00:01:36.430 CC lib/env_dpdk/memory.o 00:01:36.430 CC lib/env_dpdk/pci.o 00:01:36.430 CC lib/env_dpdk/init.o 00:01:36.430 CC lib/env_dpdk/threads.o 00:01:36.430 CC lib/env_dpdk/pci_ioat.o 00:01:36.430 CC lib/env_dpdk/pci_virtio.o 00:01:36.430 CC lib/env_dpdk/pci_vmd.o 00:01:36.430 CC lib/env_dpdk/pci_idxd.o 00:01:36.430 CC lib/env_dpdk/sigbus_handler.o 00:01:36.430 CC lib/env_dpdk/pci_event.o 00:01:36.430 CC lib/env_dpdk/pci_dpdk.o 00:01:36.430 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:36.430 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:36.430 CC lib/vmd/vmd.o 00:01:36.430 CC lib/vmd/led.o 00:01:36.430 CC lib/rdma/common.o 00:01:36.430 CC lib/conf/conf.o 00:01:36.430 CC lib/rdma/rdma_verbs.o 00:01:36.430 CC lib/json/json_parse.o 00:01:36.430 CC lib/json/json_util.o 00:01:36.430 CC lib/json/json_write.o 00:01:36.430 CC lib/idxd/idxd_user.o 00:01:36.430 CC lib/idxd/idxd.o 00:01:36.430 CC lib/idxd/idxd_kernel.o 00:01:36.688 LIB libspdk_conf.a 00:01:36.688 SO libspdk_conf.so.6.0 00:01:36.688 LIB libspdk_rdma.a 00:01:36.688 LIB libspdk_json.a 00:01:36.688 SO libspdk_rdma.so.6.0 00:01:36.688 SYMLINK libspdk_conf.so 00:01:36.688 SO libspdk_json.so.6.0 00:01:36.688 SYMLINK libspdk_rdma.so 00:01:36.688 SYMLINK libspdk_json.so 00:01:36.688 LIB libspdk_idxd.a 00:01:36.947 LIB libspdk_vmd.a 00:01:36.947 SO libspdk_idxd.so.12.0 00:01:36.947 SO libspdk_vmd.so.6.0 00:01:36.947 SYMLINK libspdk_idxd.so 00:01:36.947 SYMLINK libspdk_vmd.so 00:01:36.947 CC lib/jsonrpc/jsonrpc_server.o 00:01:36.947 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:36.947 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:36.947 CC lib/jsonrpc/jsonrpc_client.o 00:01:37.205 LIB libspdk_jsonrpc.a 00:01:37.205 SO libspdk_jsonrpc.so.6.0 00:01:37.205 SYMLINK libspdk_jsonrpc.so 00:01:37.205 LIB libspdk_env_dpdk.a 00:01:37.463 SO libspdk_env_dpdk.so.14.1 00:01:37.463 SYMLINK libspdk_env_dpdk.so 00:01:37.463 CC lib/rpc/rpc.o 00:01:37.721 LIB libspdk_rpc.a 00:01:37.721 SO libspdk_rpc.so.6.0 00:01:37.978 SYMLINK libspdk_rpc.so 00:01:38.239 CC 
lib/notify/notify.o 00:01:38.239 CC lib/notify/notify_rpc.o 00:01:38.239 CC lib/trace/trace.o 00:01:38.239 CC lib/trace/trace_flags.o 00:01:38.239 CC lib/trace/trace_rpc.o 00:01:38.239 CC lib/keyring/keyring.o 00:01:38.239 CC lib/keyring/keyring_rpc.o 00:01:38.239 LIB libspdk_notify.a 00:01:38.239 SO libspdk_notify.so.6.0 00:01:38.239 LIB libspdk_keyring.a 00:01:38.239 LIB libspdk_trace.a 00:01:38.239 SO libspdk_keyring.so.1.0 00:01:38.497 SYMLINK libspdk_notify.so 00:01:38.497 SO libspdk_trace.so.10.0 00:01:38.497 SYMLINK libspdk_keyring.so 00:01:38.497 SYMLINK libspdk_trace.so 00:01:38.755 CC lib/sock/sock.o 00:01:38.755 CC lib/sock/sock_rpc.o 00:01:38.755 CC lib/thread/thread.o 00:01:38.755 CC lib/thread/iobuf.o 00:01:39.047 LIB libspdk_sock.a 00:01:39.047 SO libspdk_sock.so.9.0 00:01:39.047 SYMLINK libspdk_sock.so 00:01:39.311 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:39.311 CC lib/nvme/nvme_ctrlr.o 00:01:39.311 CC lib/nvme/nvme_fabric.o 00:01:39.311 CC lib/nvme/nvme_ns_cmd.o 00:01:39.311 CC lib/nvme/nvme_ns.o 00:01:39.311 CC lib/nvme/nvme_pcie_common.o 00:01:39.311 CC lib/nvme/nvme_pcie.o 00:01:39.311 CC lib/nvme/nvme_qpair.o 00:01:39.311 CC lib/nvme/nvme.o 00:01:39.311 CC lib/nvme/nvme_quirks.o 00:01:39.311 CC lib/nvme/nvme_transport.o 00:01:39.311 CC lib/nvme/nvme_discovery.o 00:01:39.311 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:39.311 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:39.311 CC lib/nvme/nvme_tcp.o 00:01:39.311 CC lib/nvme/nvme_opal.o 00:01:39.311 CC lib/nvme/nvme_io_msg.o 00:01:39.311 CC lib/nvme/nvme_poll_group.o 00:01:39.311 CC lib/nvme/nvme_zns.o 00:01:39.311 CC lib/nvme/nvme_stubs.o 00:01:39.311 CC lib/nvme/nvme_auth.o 00:01:39.311 CC lib/nvme/nvme_cuse.o 00:01:39.311 CC lib/nvme/nvme_rdma.o 00:01:39.876 LIB libspdk_thread.a 00:01:39.876 SO libspdk_thread.so.10.0 00:01:39.876 SYMLINK libspdk_thread.so 00:01:40.134 CC lib/virtio/virtio.o 00:01:40.134 CC lib/init/json_config.o 00:01:40.134 CC lib/virtio/virtio_vhost_user.o 00:01:40.134 CC lib/init/subsystem.o 00:01:40.134 CC lib/virtio/virtio_vfio_user.o 00:01:40.134 CC lib/init/subsystem_rpc.o 00:01:40.134 CC lib/virtio/virtio_pci.o 00:01:40.134 CC lib/init/rpc.o 00:01:40.134 CC lib/accel/accel.o 00:01:40.134 CC lib/accel/accel_rpc.o 00:01:40.134 CC lib/accel/accel_sw.o 00:01:40.134 CC lib/blob/request.o 00:01:40.134 CC lib/blob/blobstore.o 00:01:40.134 CC lib/blob/zeroes.o 00:01:40.134 CC lib/blob/blob_bs_dev.o 00:01:40.393 LIB libspdk_init.a 00:01:40.393 SO libspdk_init.so.5.0 00:01:40.393 LIB libspdk_virtio.a 00:01:40.393 SO libspdk_virtio.so.7.0 00:01:40.393 SYMLINK libspdk_init.so 00:01:40.652 SYMLINK libspdk_virtio.so 00:01:40.652 CC lib/event/app.o 00:01:40.652 CC lib/event/reactor.o 00:01:40.652 CC lib/event/log_rpc.o 00:01:40.652 CC lib/event/app_rpc.o 00:01:40.652 CC lib/event/scheduler_static.o 00:01:40.911 LIB libspdk_accel.a 00:01:40.911 SO libspdk_accel.so.15.0 00:01:40.911 LIB libspdk_nvme.a 00:01:40.911 SYMLINK libspdk_accel.so 00:01:41.170 LIB libspdk_event.a 00:01:41.170 SO libspdk_nvme.so.13.0 00:01:41.170 SO libspdk_event.so.13.1 00:01:41.170 SYMLINK libspdk_event.so 00:01:41.170 CC lib/bdev/bdev.o 00:01:41.170 CC lib/bdev/bdev_zone.o 00:01:41.170 CC lib/bdev/bdev_rpc.o 00:01:41.170 CC lib/bdev/part.o 00:01:41.170 CC lib/bdev/scsi_nvme.o 00:01:41.429 SYMLINK libspdk_nvme.so 00:01:42.365 LIB libspdk_blob.a 00:01:42.365 SO libspdk_blob.so.11.0 00:01:42.365 SYMLINK libspdk_blob.so 00:01:42.623 CC lib/blobfs/blobfs.o 00:01:42.623 CC lib/blobfs/tree.o 00:01:42.623 CC lib/lvol/lvol.o 00:01:42.884 LIB libspdk_bdev.a 
00:01:42.884 SO libspdk_bdev.so.15.0 00:01:43.140 SYMLINK libspdk_bdev.so 00:01:43.140 LIB libspdk_blobfs.a 00:01:43.141 SO libspdk_blobfs.so.10.0 00:01:43.141 LIB libspdk_lvol.a 00:01:43.398 SYMLINK libspdk_blobfs.so 00:01:43.398 SO libspdk_lvol.so.10.0 00:01:43.398 CC lib/scsi/dev.o 00:01:43.399 CC lib/nvmf/ctrlr.o 00:01:43.399 CC lib/scsi/port.o 00:01:43.399 CC lib/nvmf/ctrlr_discovery.o 00:01:43.399 CC lib/scsi/lun.o 00:01:43.399 CC lib/nvmf/ctrlr_bdev.o 00:01:43.399 CC lib/scsi/scsi.o 00:01:43.399 CC lib/nvmf/subsystem.o 00:01:43.399 CC lib/scsi/scsi_bdev.o 00:01:43.399 CC lib/nvmf/nvmf_rpc.o 00:01:43.399 CC lib/nvmf/nvmf.o 00:01:43.399 CC lib/scsi/scsi_pr.o 00:01:43.399 CC lib/scsi/scsi_rpc.o 00:01:43.399 CC lib/scsi/task.o 00:01:43.399 CC lib/nvmf/transport.o 00:01:43.399 CC lib/nvmf/tcp.o 00:01:43.399 CC lib/nvmf/stubs.o 00:01:43.399 CC lib/nvmf/mdns_server.o 00:01:43.399 CC lib/nvmf/rdma.o 00:01:43.399 CC lib/nvmf/auth.o 00:01:43.399 SYMLINK libspdk_lvol.so 00:01:43.399 CC lib/nbd/nbd.o 00:01:43.399 CC lib/ftl/ftl_core.o 00:01:43.399 CC lib/nbd/nbd_rpc.o 00:01:43.399 CC lib/ublk/ublk.o 00:01:43.399 CC lib/ftl/ftl_init.o 00:01:43.399 CC lib/ftl/ftl_layout.o 00:01:43.399 CC lib/ublk/ublk_rpc.o 00:01:43.399 CC lib/ftl/ftl_debug.o 00:01:43.399 CC lib/ftl/ftl_io.o 00:01:43.399 CC lib/ftl/ftl_sb.o 00:01:43.399 CC lib/ftl/ftl_l2p.o 00:01:43.399 CC lib/ftl/ftl_l2p_flat.o 00:01:43.399 CC lib/ftl/ftl_nv_cache.o 00:01:43.399 CC lib/ftl/ftl_band.o 00:01:43.399 CC lib/ftl/ftl_band_ops.o 00:01:43.399 CC lib/ftl/ftl_writer.o 00:01:43.399 CC lib/ftl/ftl_rq.o 00:01:43.399 CC lib/ftl/ftl_reloc.o 00:01:43.399 CC lib/ftl/ftl_l2p_cache.o 00:01:43.399 CC lib/ftl/ftl_p2l.o 00:01:43.399 CC lib/ftl/mngt/ftl_mngt.o 00:01:43.399 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:43.399 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:43.399 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:43.399 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:43.399 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:43.399 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:43.399 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:43.399 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:43.399 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:43.399 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:43.399 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:43.399 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:43.399 CC lib/ftl/utils/ftl_conf.o 00:01:43.399 CC lib/ftl/utils/ftl_bitmap.o 00:01:43.399 CC lib/ftl/utils/ftl_mempool.o 00:01:43.399 CC lib/ftl/utils/ftl_md.o 00:01:43.399 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:43.399 CC lib/ftl/utils/ftl_property.o 00:01:43.399 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:43.399 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:43.399 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:43.399 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:43.399 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:43.399 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:43.399 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:43.399 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:43.399 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:43.399 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:43.399 CC lib/ftl/base/ftl_base_dev.o 00:01:43.399 CC lib/ftl/base/ftl_base_bdev.o 00:01:43.399 CC lib/ftl/ftl_trace.o 00:01:43.965 LIB libspdk_nbd.a 00:01:43.965 SO libspdk_nbd.so.7.0 00:01:43.965 LIB libspdk_scsi.a 00:01:43.965 SYMLINK libspdk_nbd.so 00:01:43.965 SO libspdk_scsi.so.9.0 00:01:43.965 SYMLINK libspdk_scsi.so 00:01:43.965 LIB libspdk_ublk.a 00:01:43.965 SO libspdk_ublk.so.3.0 00:01:44.223 SYMLINK libspdk_ublk.so 00:01:44.223 LIB libspdk_ftl.a 00:01:44.223 CC lib/vhost/vhost.o 00:01:44.223 CC 
lib/iscsi/conn.o 00:01:44.223 CC lib/vhost/vhost_rpc.o 00:01:44.223 CC lib/iscsi/iscsi.o 00:01:44.223 CC lib/iscsi/init_grp.o 00:01:44.223 CC lib/vhost/vhost_scsi.o 00:01:44.223 CC lib/iscsi/md5.o 00:01:44.223 CC lib/vhost/rte_vhost_user.o 00:01:44.223 CC lib/vhost/vhost_blk.o 00:01:44.223 CC lib/iscsi/param.o 00:01:44.223 CC lib/iscsi/portal_grp.o 00:01:44.223 CC lib/iscsi/tgt_node.o 00:01:44.223 CC lib/iscsi/iscsi_subsystem.o 00:01:44.223 CC lib/iscsi/iscsi_rpc.o 00:01:44.223 CC lib/iscsi/task.o 00:01:44.481 SO libspdk_ftl.so.9.0 00:01:44.740 SYMLINK libspdk_ftl.so 00:01:44.998 LIB libspdk_nvmf.a 00:01:44.998 SO libspdk_nvmf.so.18.1 00:01:44.998 LIB libspdk_vhost.a 00:01:45.257 SO libspdk_vhost.so.8.0 00:01:45.257 SYMLINK libspdk_nvmf.so 00:01:45.257 SYMLINK libspdk_vhost.so 00:01:45.257 LIB libspdk_iscsi.a 00:01:45.257 SO libspdk_iscsi.so.8.0 00:01:45.516 SYMLINK libspdk_iscsi.so 00:01:46.083 CC module/env_dpdk/env_dpdk_rpc.o 00:01:46.083 LIB libspdk_env_dpdk_rpc.a 00:01:46.083 CC module/accel/ioat/accel_ioat.o 00:01:46.083 CC module/accel/ioat/accel_ioat_rpc.o 00:01:46.083 CC module/accel/iaa/accel_iaa.o 00:01:46.083 CC module/blob/bdev/blob_bdev.o 00:01:46.083 CC module/accel/iaa/accel_iaa_rpc.o 00:01:46.083 CC module/accel/error/accel_error.o 00:01:46.083 CC module/accel/error/accel_error_rpc.o 00:01:46.083 CC module/accel/dsa/accel_dsa.o 00:01:46.083 CC module/accel/dsa/accel_dsa_rpc.o 00:01:46.083 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:46.083 CC module/keyring/file/keyring.o 00:01:46.083 CC module/keyring/file/keyring_rpc.o 00:01:46.083 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:46.083 CC module/keyring/linux/keyring_rpc.o 00:01:46.083 CC module/keyring/linux/keyring.o 00:01:46.083 SO libspdk_env_dpdk_rpc.so.6.0 00:01:46.083 CC module/scheduler/gscheduler/gscheduler.o 00:01:46.083 CC module/sock/posix/posix.o 00:01:46.083 SYMLINK libspdk_env_dpdk_rpc.so 00:01:46.083 LIB libspdk_scheduler_dpdk_governor.a 00:01:46.341 LIB libspdk_keyring_file.a 00:01:46.341 LIB libspdk_keyring_linux.a 00:01:46.341 LIB libspdk_scheduler_gscheduler.a 00:01:46.341 LIB libspdk_accel_ioat.a 00:01:46.341 LIB libspdk_accel_iaa.a 00:01:46.342 LIB libspdk_accel_error.a 00:01:46.342 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:46.342 LIB libspdk_scheduler_dynamic.a 00:01:46.342 SO libspdk_keyring_file.so.1.0 00:01:46.342 SO libspdk_scheduler_gscheduler.so.4.0 00:01:46.342 SO libspdk_keyring_linux.so.1.0 00:01:46.342 SO libspdk_accel_ioat.so.6.0 00:01:46.342 SO libspdk_accel_iaa.so.3.0 00:01:46.342 SO libspdk_scheduler_dynamic.so.4.0 00:01:46.342 SO libspdk_accel_error.so.2.0 00:01:46.342 LIB libspdk_blob_bdev.a 00:01:46.342 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:46.342 LIB libspdk_accel_dsa.a 00:01:46.342 SYMLINK libspdk_scheduler_gscheduler.so 00:01:46.342 SO libspdk_blob_bdev.so.11.0 00:01:46.342 SYMLINK libspdk_keyring_file.so 00:01:46.342 SYMLINK libspdk_keyring_linux.so 00:01:46.342 SYMLINK libspdk_accel_error.so 00:01:46.342 SYMLINK libspdk_accel_ioat.so 00:01:46.342 SYMLINK libspdk_scheduler_dynamic.so 00:01:46.342 SO libspdk_accel_dsa.so.5.0 00:01:46.342 SYMLINK libspdk_accel_iaa.so 00:01:46.342 SYMLINK libspdk_blob_bdev.so 00:01:46.342 SYMLINK libspdk_accel_dsa.so 00:01:46.600 LIB libspdk_sock_posix.a 00:01:46.600 SO libspdk_sock_posix.so.6.0 00:01:46.858 CC module/bdev/malloc/bdev_malloc.o 00:01:46.858 CC module/bdev/nvme/bdev_nvme.o 00:01:46.858 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:46.858 CC module/bdev/error/vbdev_error.o 00:01:46.858 CC 
module/bdev/error/vbdev_error_rpc.o 00:01:46.858 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:46.858 CC module/bdev/nvme/nvme_rpc.o 00:01:46.858 CC module/bdev/nvme/vbdev_opal.o 00:01:46.858 CC module/bdev/nvme/bdev_mdns_client.o 00:01:46.858 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:46.858 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:46.858 CC module/bdev/split/vbdev_split.o 00:01:46.858 CC module/bdev/split/vbdev_split_rpc.o 00:01:46.858 CC module/bdev/passthru/vbdev_passthru.o 00:01:46.858 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:46.858 CC module/bdev/aio/bdev_aio.o 00:01:46.858 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:46.858 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:46.858 CC module/bdev/aio/bdev_aio_rpc.o 00:01:46.858 CC module/bdev/null/bdev_null.o 00:01:46.858 CC module/bdev/null/bdev_null_rpc.o 00:01:46.858 CC module/blobfs/bdev/blobfs_bdev.o 00:01:46.858 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:46.858 SYMLINK libspdk_sock_posix.so 00:01:46.858 CC module/bdev/gpt/vbdev_gpt.o 00:01:46.858 CC module/bdev/gpt/gpt.o 00:01:46.858 CC module/bdev/iscsi/bdev_iscsi.o 00:01:46.858 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:46.859 CC module/bdev/delay/vbdev_delay.o 00:01:46.859 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:46.859 CC module/bdev/ftl/bdev_ftl.o 00:01:46.859 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:46.859 CC module/bdev/lvol/vbdev_lvol.o 00:01:46.859 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:46.859 CC module/bdev/raid/bdev_raid.o 00:01:46.859 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:46.859 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:46.859 CC module/bdev/raid/bdev_raid_sb.o 00:01:46.859 CC module/bdev/raid/bdev_raid_rpc.o 00:01:46.859 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:46.859 CC module/bdev/raid/raid0.o 00:01:46.859 CC module/bdev/raid/raid1.o 00:01:46.859 CC module/bdev/raid/concat.o 00:01:47.117 LIB libspdk_bdev_split.a 00:01:47.117 LIB libspdk_bdev_error.a 00:01:47.117 LIB libspdk_blobfs_bdev.a 00:01:47.117 SO libspdk_bdev_split.so.6.0 00:01:47.117 LIB libspdk_bdev_null.a 00:01:47.117 SO libspdk_bdev_error.so.6.0 00:01:47.117 SO libspdk_blobfs_bdev.so.6.0 00:01:47.117 LIB libspdk_bdev_ftl.a 00:01:47.117 SO libspdk_bdev_null.so.6.0 00:01:47.117 LIB libspdk_bdev_gpt.a 00:01:47.117 SYMLINK libspdk_bdev_split.so 00:01:47.117 LIB libspdk_bdev_passthru.a 00:01:47.117 SO libspdk_bdev_ftl.so.6.0 00:01:47.117 SYMLINK libspdk_blobfs_bdev.so 00:01:47.117 SYMLINK libspdk_bdev_error.so 00:01:47.117 SO libspdk_bdev_gpt.so.6.0 00:01:47.117 LIB libspdk_bdev_iscsi.a 00:01:47.117 LIB libspdk_bdev_malloc.a 00:01:47.117 LIB libspdk_bdev_aio.a 00:01:47.117 SYMLINK libspdk_bdev_null.so 00:01:47.117 LIB libspdk_bdev_zone_block.a 00:01:47.117 SO libspdk_bdev_passthru.so.6.0 00:01:47.117 SO libspdk_bdev_iscsi.so.6.0 00:01:47.117 SO libspdk_bdev_aio.so.6.0 00:01:47.117 SO libspdk_bdev_malloc.so.6.0 00:01:47.117 LIB libspdk_bdev_delay.a 00:01:47.117 SO libspdk_bdev_zone_block.so.6.0 00:01:47.117 SYMLINK libspdk_bdev_gpt.so 00:01:47.117 SYMLINK libspdk_bdev_ftl.so 00:01:47.117 SO libspdk_bdev_delay.so.6.0 00:01:47.117 SYMLINK libspdk_bdev_passthru.so 00:01:47.375 SYMLINK libspdk_bdev_iscsi.so 00:01:47.375 SYMLINK libspdk_bdev_aio.so 00:01:47.375 SYMLINK libspdk_bdev_malloc.so 00:01:47.375 SYMLINK libspdk_bdev_zone_block.so 00:01:47.375 SYMLINK libspdk_bdev_delay.so 00:01:47.375 LIB libspdk_bdev_lvol.a 00:01:47.375 LIB libspdk_bdev_virtio.a 00:01:47.375 SO libspdk_bdev_lvol.so.6.0 00:01:47.375 SO libspdk_bdev_virtio.so.6.0 00:01:47.375 SYMLINK 
libspdk_bdev_lvol.so 00:01:47.375 SYMLINK libspdk_bdev_virtio.so 00:01:47.634 LIB libspdk_bdev_raid.a 00:01:47.634 SO libspdk_bdev_raid.so.6.0 00:01:47.892 SYMLINK libspdk_bdev_raid.so 00:01:48.460 LIB libspdk_bdev_nvme.a 00:01:48.460 SO libspdk_bdev_nvme.so.7.0 00:01:48.460 SYMLINK libspdk_bdev_nvme.so 00:01:49.027 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:49.027 CC module/event/subsystems/iobuf/iobuf.o 00:01:49.027 CC module/event/subsystems/vmd/vmd.o 00:01:49.027 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:49.027 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:49.027 CC module/event/subsystems/scheduler/scheduler.o 00:01:49.027 CC module/event/subsystems/sock/sock.o 00:01:49.027 CC module/event/subsystems/keyring/keyring.o 00:01:49.285 LIB libspdk_event_iobuf.a 00:01:49.285 LIB libspdk_event_vhost_blk.a 00:01:49.285 LIB libspdk_event_vmd.a 00:01:49.285 LIB libspdk_event_scheduler.a 00:01:49.285 LIB libspdk_event_keyring.a 00:01:49.285 LIB libspdk_event_sock.a 00:01:49.285 SO libspdk_event_scheduler.so.4.0 00:01:49.285 SO libspdk_event_iobuf.so.3.0 00:01:49.285 SO libspdk_event_vhost_blk.so.3.0 00:01:49.285 SO libspdk_event_vmd.so.6.0 00:01:49.285 SO libspdk_event_keyring.so.1.0 00:01:49.285 SO libspdk_event_sock.so.5.0 00:01:49.285 SYMLINK libspdk_event_vhost_blk.so 00:01:49.285 SYMLINK libspdk_event_scheduler.so 00:01:49.285 SYMLINK libspdk_event_iobuf.so 00:01:49.285 SYMLINK libspdk_event_vmd.so 00:01:49.285 SYMLINK libspdk_event_keyring.so 00:01:49.285 SYMLINK libspdk_event_sock.so 00:01:49.544 CC module/event/subsystems/accel/accel.o 00:01:49.803 LIB libspdk_event_accel.a 00:01:49.803 SO libspdk_event_accel.so.6.0 00:01:49.803 SYMLINK libspdk_event_accel.so 00:01:50.061 CC module/event/subsystems/bdev/bdev.o 00:01:50.061 LIB libspdk_event_bdev.a 00:01:50.320 SO libspdk_event_bdev.so.6.0 00:01:50.320 SYMLINK libspdk_event_bdev.so 00:01:50.578 CC module/event/subsystems/scsi/scsi.o 00:01:50.578 CC module/event/subsystems/ublk/ublk.o 00:01:50.578 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:50.578 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:50.578 CC module/event/subsystems/nbd/nbd.o 00:01:50.578 LIB libspdk_event_ublk.a 00:01:50.836 LIB libspdk_event_scsi.a 00:01:50.836 LIB libspdk_event_nbd.a 00:01:50.836 SO libspdk_event_ublk.so.3.0 00:01:50.836 SO libspdk_event_scsi.so.6.0 00:01:50.836 SO libspdk_event_nbd.so.6.0 00:01:50.836 LIB libspdk_event_nvmf.a 00:01:50.836 SYMLINK libspdk_event_ublk.so 00:01:50.836 SYMLINK libspdk_event_nbd.so 00:01:50.836 SYMLINK libspdk_event_scsi.so 00:01:50.836 SO libspdk_event_nvmf.so.6.0 00:01:50.836 SYMLINK libspdk_event_nvmf.so 00:01:51.095 CC module/event/subsystems/iscsi/iscsi.o 00:01:51.095 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:51.354 LIB libspdk_event_vhost_scsi.a 00:01:51.354 LIB libspdk_event_iscsi.a 00:01:51.354 SO libspdk_event_vhost_scsi.so.3.0 00:01:51.354 SO libspdk_event_iscsi.so.6.0 00:01:51.354 SYMLINK libspdk_event_vhost_scsi.so 00:01:51.354 SYMLINK libspdk_event_iscsi.so 00:01:51.612 SO libspdk.so.6.0 00:01:51.612 SYMLINK libspdk.so 00:01:51.875 CXX app/trace/trace.o 00:01:51.875 CC app/spdk_nvme_discover/discovery_aer.o 00:01:51.875 CC app/spdk_lspci/spdk_lspci.o 00:01:51.875 CC app/spdk_nvme_perf/perf.o 00:01:51.875 CC app/trace_record/trace_record.o 00:01:51.875 CC app/spdk_top/spdk_top.o 00:01:51.875 CC app/spdk_nvme_identify/identify.o 00:01:51.875 TEST_HEADER include/spdk/accel.h 00:01:51.875 TEST_HEADER include/spdk/accel_module.h 00:01:51.875 TEST_HEADER include/spdk/base64.h 
00:01:51.875 TEST_HEADER include/spdk/assert.h 00:01:51.875 TEST_HEADER include/spdk/barrier.h 00:01:51.875 TEST_HEADER include/spdk/bdev_zone.h 00:01:51.875 TEST_HEADER include/spdk/bdev_module.h 00:01:51.875 TEST_HEADER include/spdk/bdev.h 00:01:51.875 TEST_HEADER include/spdk/bit_array.h 00:01:51.875 TEST_HEADER include/spdk/bit_pool.h 00:01:51.875 TEST_HEADER include/spdk/blob_bdev.h 00:01:51.875 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:51.875 TEST_HEADER include/spdk/blobfs.h 00:01:51.875 TEST_HEADER include/spdk/conf.h 00:01:51.875 TEST_HEADER include/spdk/blob.h 00:01:51.875 CC test/rpc_client/rpc_client_test.o 00:01:51.875 TEST_HEADER include/spdk/cpuset.h 00:01:51.875 TEST_HEADER include/spdk/config.h 00:01:51.875 TEST_HEADER include/spdk/crc16.h 00:01:51.875 TEST_HEADER include/spdk/crc64.h 00:01:51.875 TEST_HEADER include/spdk/crc32.h 00:01:51.875 TEST_HEADER include/spdk/dif.h 00:01:51.875 TEST_HEADER include/spdk/endian.h 00:01:51.875 TEST_HEADER include/spdk/env_dpdk.h 00:01:51.875 TEST_HEADER include/spdk/dma.h 00:01:51.875 TEST_HEADER include/spdk/event.h 00:01:51.875 TEST_HEADER include/spdk/fd_group.h 00:01:51.875 TEST_HEADER include/spdk/env.h 00:01:51.875 TEST_HEADER include/spdk/file.h 00:01:51.875 TEST_HEADER include/spdk/fd.h 00:01:51.875 TEST_HEADER include/spdk/ftl.h 00:01:51.875 TEST_HEADER include/spdk/gpt_spec.h 00:01:51.875 TEST_HEADER include/spdk/histogram_data.h 00:01:51.875 TEST_HEADER include/spdk/hexlify.h 00:01:51.875 TEST_HEADER include/spdk/idxd_spec.h 00:01:51.875 TEST_HEADER include/spdk/idxd.h 00:01:51.875 TEST_HEADER include/spdk/init.h 00:01:51.875 TEST_HEADER include/spdk/ioat_spec.h 00:01:51.875 TEST_HEADER include/spdk/ioat.h 00:01:51.875 TEST_HEADER include/spdk/iscsi_spec.h 00:01:51.875 CC app/spdk_dd/spdk_dd.o 00:01:51.875 TEST_HEADER include/spdk/json.h 00:01:51.875 CC app/vhost/vhost.o 00:01:51.875 CC app/iscsi_tgt/iscsi_tgt.o 00:01:51.875 CC app/nvmf_tgt/nvmf_main.o 00:01:51.875 TEST_HEADER include/spdk/jsonrpc.h 00:01:51.875 TEST_HEADER include/spdk/keyring.h 00:01:51.875 TEST_HEADER include/spdk/keyring_module.h 00:01:51.875 TEST_HEADER include/spdk/log.h 00:01:51.875 TEST_HEADER include/spdk/likely.h 00:01:51.875 TEST_HEADER include/spdk/lvol.h 00:01:51.875 TEST_HEADER include/spdk/mmio.h 00:01:51.875 TEST_HEADER include/spdk/memory.h 00:01:51.875 TEST_HEADER include/spdk/nbd.h 00:01:51.875 TEST_HEADER include/spdk/notify.h 00:01:51.875 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:51.875 TEST_HEADER include/spdk/nvme.h 00:01:51.875 TEST_HEADER include/spdk/nvme_intel.h 00:01:51.875 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:51.875 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:51.875 TEST_HEADER include/spdk/nvme_spec.h 00:01:51.875 TEST_HEADER include/spdk/nvme_zns.h 00:01:51.875 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:51.875 TEST_HEADER include/spdk/nvmf.h 00:01:51.875 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:51.875 TEST_HEADER include/spdk/nvmf_spec.h 00:01:51.875 TEST_HEADER include/spdk/nvmf_transport.h 00:01:51.875 TEST_HEADER include/spdk/opal.h 00:01:51.875 TEST_HEADER include/spdk/pci_ids.h 00:01:51.875 TEST_HEADER include/spdk/opal_spec.h 00:01:51.875 TEST_HEADER include/spdk/reduce.h 00:01:51.875 TEST_HEADER include/spdk/pipe.h 00:01:51.875 TEST_HEADER include/spdk/rpc.h 00:01:51.875 TEST_HEADER include/spdk/queue.h 00:01:51.875 TEST_HEADER include/spdk/scsi.h 00:01:51.875 TEST_HEADER include/spdk/scheduler.h 00:01:51.875 CC app/spdk_tgt/spdk_tgt.o 00:01:51.875 TEST_HEADER include/spdk/scsi_spec.h 
00:01:51.875 TEST_HEADER include/spdk/sock.h 00:01:51.875 TEST_HEADER include/spdk/stdinc.h 00:01:51.875 TEST_HEADER include/spdk/string.h 00:01:51.875 TEST_HEADER include/spdk/thread.h 00:01:51.875 TEST_HEADER include/spdk/trace.h 00:01:51.875 TEST_HEADER include/spdk/trace_parser.h 00:01:51.875 TEST_HEADER include/spdk/ublk.h 00:01:51.875 TEST_HEADER include/spdk/tree.h 00:01:51.875 TEST_HEADER include/spdk/util.h 00:01:51.875 TEST_HEADER include/spdk/uuid.h 00:01:51.875 TEST_HEADER include/spdk/version.h 00:01:51.875 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:51.875 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:51.875 TEST_HEADER include/spdk/vhost.h 00:01:51.875 TEST_HEADER include/spdk/vmd.h 00:01:51.875 TEST_HEADER include/spdk/xor.h 00:01:51.875 TEST_HEADER include/spdk/zipf.h 00:01:51.875 CXX test/cpp_headers/accel.o 00:01:51.875 CXX test/cpp_headers/assert.o 00:01:51.875 CXX test/cpp_headers/accel_module.o 00:01:51.875 CXX test/cpp_headers/barrier.o 00:01:51.875 CXX test/cpp_headers/base64.o 00:01:51.875 CXX test/cpp_headers/bdev.o 00:01:51.875 CXX test/cpp_headers/bdev_module.o 00:01:51.875 CXX test/cpp_headers/bdev_zone.o 00:01:51.875 CXX test/cpp_headers/bit_array.o 00:01:51.875 CXX test/cpp_headers/bit_pool.o 00:01:51.875 CXX test/cpp_headers/blob_bdev.o 00:01:51.875 CXX test/cpp_headers/blobfs_bdev.o 00:01:51.875 CXX test/cpp_headers/blobfs.o 00:01:51.875 CXX test/cpp_headers/blob.o 00:01:51.875 CXX test/cpp_headers/conf.o 00:01:51.875 CXX test/cpp_headers/config.o 00:01:51.875 CXX test/cpp_headers/cpuset.o 00:01:51.875 CXX test/cpp_headers/crc32.o 00:01:51.875 CXX test/cpp_headers/crc16.o 00:01:51.875 CXX test/cpp_headers/crc64.o 00:01:51.875 CXX test/cpp_headers/dif.o 00:01:51.875 CC examples/vmd/led/led.o 00:01:52.143 CC examples/nvme/abort/abort.o 00:01:52.143 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:52.143 CC examples/nvme/reconnect/reconnect.o 00:01:52.143 CC examples/vmd/lsvmd/lsvmd.o 00:01:52.143 CC app/fio/nvme/fio_plugin.o 00:01:52.143 CC examples/nvme/arbitration/arbitration.o 00:01:52.143 CXX test/cpp_headers/dma.o 00:01:52.143 CC test/app/histogram_perf/histogram_perf.o 00:01:52.143 CC test/event/event_perf/event_perf.o 00:01:52.143 CC examples/ioat/verify/verify.o 00:01:52.143 CC examples/util/zipf/zipf.o 00:01:52.143 CC examples/idxd/perf/perf.o 00:01:52.143 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:52.143 CC test/nvme/e2edp/nvme_dp.o 00:01:52.143 CC test/app/stub/stub.o 00:01:52.143 CC test/app/jsoncat/jsoncat.o 00:01:52.143 CC test/env/pci/pci_ut.o 00:01:52.144 CC examples/nvme/hotplug/hotplug.o 00:01:52.144 CC test/env/memory/memory_ut.o 00:01:52.144 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:52.144 CC examples/nvme/hello_world/hello_world.o 00:01:52.144 CC examples/sock/hello_world/hello_sock.o 00:01:52.144 CC examples/blob/cli/blobcli.o 00:01:52.144 CC examples/thread/thread/thread_ex.o 00:01:52.144 CC test/nvme/aer/aer.o 00:01:52.144 CC examples/ioat/perf/perf.o 00:01:52.144 CC test/nvme/reset/reset.o 00:01:52.144 CC test/nvme/compliance/nvme_compliance.o 00:01:52.144 CC test/nvme/cuse/cuse.o 00:01:52.144 CC test/nvme/fused_ordering/fused_ordering.o 00:01:52.144 CC examples/accel/perf/accel_perf.o 00:01:52.144 CC test/env/vtophys/vtophys.o 00:01:52.144 CC test/nvme/connect_stress/connect_stress.o 00:01:52.144 CC test/nvme/boot_partition/boot_partition.o 00:01:52.144 CC test/event/reactor_perf/reactor_perf.o 00:01:52.144 CC test/nvme/sgl/sgl.o 00:01:52.144 CC examples/bdev/hello_world/hello_bdev.o 00:01:52.144 CC 
examples/blob/hello_world/hello_blob.o 00:01:52.144 CC test/event/reactor/reactor.o 00:01:52.144 CC test/event/app_repeat/app_repeat.o 00:01:52.144 CC test/nvme/fdp/fdp.o 00:01:52.144 CC test/nvme/overhead/overhead.o 00:01:52.144 CC examples/bdev/bdevperf/bdevperf.o 00:01:52.144 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:52.144 CC test/thread/poller_perf/poller_perf.o 00:01:52.144 CC test/nvme/startup/startup.o 00:01:52.144 CC app/fio/bdev/fio_plugin.o 00:01:52.144 CC test/nvme/simple_copy/simple_copy.o 00:01:52.144 CC examples/nvmf/nvmf/nvmf.o 00:01:52.144 CC test/nvme/reserve/reserve.o 00:01:52.144 CC test/app/bdev_svc/bdev_svc.o 00:01:52.144 CC test/nvme/err_injection/err_injection.o 00:01:52.144 CC test/bdev/bdevio/bdevio.o 00:01:52.144 LINK spdk_lspci 00:01:52.144 CC test/blobfs/mkfs/mkfs.o 00:01:52.144 CC test/accel/dif/dif.o 00:01:52.144 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:52.144 CC test/dma/test_dma/test_dma.o 00:01:52.144 CC test/event/scheduler/scheduler.o 00:01:52.144 LINK rpc_client_test 00:01:52.412 CC test/env/mem_callbacks/mem_callbacks.o 00:01:52.412 LINK nvmf_tgt 00:01:52.412 LINK interrupt_tgt 00:01:52.412 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:52.412 LINK spdk_tgt 00:01:52.412 LINK spdk_nvme_discover 00:01:52.412 LINK lsvmd 00:01:52.412 CC test/lvol/esnap/esnap.o 00:01:52.412 LINK histogram_perf 00:01:52.412 LINK event_perf 00:01:52.412 LINK vhost 00:01:52.412 LINK cmb_copy 00:01:52.412 CXX test/cpp_headers/endian.o 00:01:52.412 LINK pmr_persistence 00:01:52.412 LINK led 00:01:52.412 CXX test/cpp_headers/env_dpdk.o 00:01:52.412 CXX test/cpp_headers/env.o 00:01:52.412 LINK iscsi_tgt 00:01:52.412 CXX test/cpp_headers/event.o 00:01:52.412 CXX test/cpp_headers/fd_group.o 00:01:52.412 CXX test/cpp_headers/fd.o 00:01:52.412 CXX test/cpp_headers/file.o 00:01:52.412 LINK boot_partition 00:01:52.412 LINK spdk_trace_record 00:01:52.412 LINK env_dpdk_post_init 00:01:52.412 LINK startup 00:01:52.412 LINK fused_ordering 00:01:52.679 LINK jsoncat 00:01:52.679 CXX test/cpp_headers/gpt_spec.o 00:01:52.679 CXX test/cpp_headers/ftl.o 00:01:52.679 LINK reactor 00:01:52.679 LINK reactor_perf 00:01:52.679 LINK zipf 00:01:52.679 LINK hello_world 00:01:52.679 LINK doorbell_aers 00:01:52.679 LINK poller_perf 00:01:52.679 LINK hello_sock 00:01:52.679 LINK stub 00:01:52.679 LINK mkfs 00:01:52.679 LINK app_repeat 00:01:52.679 LINK vtophys 00:01:52.679 LINK hello_blob 00:01:52.679 CXX test/cpp_headers/hexlify.o 00:01:52.679 LINK reset 00:01:52.679 LINK connect_stress 00:01:52.679 LINK nvme_dp 00:01:52.679 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:52.679 CXX test/cpp_headers/histogram_data.o 00:01:52.679 CXX test/cpp_headers/idxd.o 00:01:52.679 LINK aer 00:01:52.679 CXX test/cpp_headers/idxd_spec.o 00:01:52.679 LINK overhead 00:01:52.679 CXX test/cpp_headers/init.o 00:01:52.679 LINK sgl 00:01:52.679 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:52.679 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:52.679 LINK arbitration 00:01:52.679 LINK bdev_svc 00:01:52.679 LINK err_injection 00:01:52.679 CXX test/cpp_headers/ioat.o 00:01:52.679 LINK verify 00:01:52.679 LINK reconnect 00:01:52.679 CXX test/cpp_headers/ioat_spec.o 00:01:52.679 LINK fdp 00:01:52.679 CXX test/cpp_headers/iscsi_spec.o 00:01:52.679 LINK hello_bdev 00:01:52.679 CXX test/cpp_headers/json.o 00:01:52.679 CXX test/cpp_headers/jsonrpc.o 00:01:52.679 LINK nvmf 00:01:52.679 CXX test/cpp_headers/keyring.o 00:01:52.679 CXX test/cpp_headers/keyring_module.o 00:01:52.679 LINK hotplug 00:01:52.679 LINK 
simple_copy 00:01:52.679 CXX test/cpp_headers/likely.o 00:01:52.679 LINK ioat_perf 00:01:52.679 CXX test/cpp_headers/log.o 00:01:52.679 LINK nvme_compliance 00:01:52.679 LINK reserve 00:01:52.679 LINK thread 00:01:52.679 LINK scheduler 00:01:52.679 CXX test/cpp_headers/lvol.o 00:01:52.679 CXX test/cpp_headers/memory.o 00:01:52.679 LINK spdk_dd 00:01:52.679 CXX test/cpp_headers/mmio.o 00:01:52.947 CXX test/cpp_headers/nbd.o 00:01:52.947 LINK pci_ut 00:01:52.947 CXX test/cpp_headers/notify.o 00:01:52.947 CXX test/cpp_headers/nvme.o 00:01:52.947 CXX test/cpp_headers/nvme_intel.o 00:01:52.948 CXX test/cpp_headers/nvme_ocssd.o 00:01:52.948 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:52.948 CXX test/cpp_headers/nvme_spec.o 00:01:52.948 CXX test/cpp_headers/nvme_zns.o 00:01:52.948 CXX test/cpp_headers/nvmf_cmd.o 00:01:52.948 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:52.948 CXX test/cpp_headers/nvmf.o 00:01:52.948 CXX test/cpp_headers/nvmf_spec.o 00:01:52.948 CXX test/cpp_headers/nvmf_transport.o 00:01:52.948 CXX test/cpp_headers/opal.o 00:01:52.948 LINK idxd_perf 00:01:52.948 CXX test/cpp_headers/pci_ids.o 00:01:52.948 CXX test/cpp_headers/opal_spec.o 00:01:52.948 LINK test_dma 00:01:52.948 LINK nvme_manage 00:01:52.948 CXX test/cpp_headers/pipe.o 00:01:52.948 CXX test/cpp_headers/queue.o 00:01:52.948 LINK accel_perf 00:01:52.948 LINK abort 00:01:52.948 CXX test/cpp_headers/reduce.o 00:01:52.948 CXX test/cpp_headers/rpc.o 00:01:52.948 CXX test/cpp_headers/scheduler.o 00:01:52.948 CXX test/cpp_headers/scsi.o 00:01:52.948 CXX test/cpp_headers/scsi_spec.o 00:01:52.948 CXX test/cpp_headers/sock.o 00:01:52.948 CXX test/cpp_headers/stdinc.o 00:01:52.948 CXX test/cpp_headers/string.o 00:01:52.948 LINK nvme_fuzz 00:01:52.948 CXX test/cpp_headers/thread.o 00:01:52.948 CXX test/cpp_headers/trace.o 00:01:52.948 LINK bdevio 00:01:52.948 LINK spdk_trace 00:01:52.948 CXX test/cpp_headers/tree.o 00:01:52.948 CXX test/cpp_headers/trace_parser.o 00:01:52.948 CXX test/cpp_headers/ublk.o 00:01:52.948 CXX test/cpp_headers/util.o 00:01:53.206 CXX test/cpp_headers/uuid.o 00:01:53.206 CXX test/cpp_headers/version.o 00:01:53.206 CXX test/cpp_headers/vfio_user_pci.o 00:01:53.206 CXX test/cpp_headers/vfio_user_spec.o 00:01:53.206 CXX test/cpp_headers/vhost.o 00:01:53.206 CXX test/cpp_headers/vmd.o 00:01:53.206 CXX test/cpp_headers/xor.o 00:01:53.206 CXX test/cpp_headers/zipf.o 00:01:53.206 LINK dif 00:01:53.206 LINK spdk_nvme_perf 00:01:53.206 LINK blobcli 00:01:53.206 LINK mem_callbacks 00:01:53.206 LINK spdk_bdev 00:01:53.206 LINK spdk_top 00:01:53.206 LINK vhost_fuzz 00:01:53.206 LINK bdevperf 00:01:53.206 LINK spdk_nvme 00:01:53.772 LINK spdk_nvme_identify 00:01:53.772 LINK cuse 00:01:53.772 LINK memory_ut 00:01:54.338 LINK iscsi_fuzz 00:01:56.240 LINK esnap 00:01:56.498 00:01:56.498 real 0m42.762s 00:01:56.498 user 6m59.011s 00:01:56.498 sys 3m26.940s 00:01:56.498 08:40:19 make -- common/autotest_common.sh@1125 -- $ xtrace_disable 00:01:56.498 08:40:19 make -- common/autotest_common.sh@10 -- $ set +x 00:01:56.498 ************************************ 00:01:56.498 END TEST make 00:01:56.498 ************************************ 00:01:56.498 08:40:19 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:56.498 08:40:19 -- pm/common@29 -- $ signal_monitor_resources TERM 00:01:56.498 08:40:19 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:01:56.498 08:40:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.498 08:40:19 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:56.498 08:40:19 -- pm/common@44 -- $ pid=1038996 00:01:56.498 08:40:19 -- pm/common@50 -- $ kill -TERM 1038996 00:01:56.498 08:40:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.498 08:40:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:56.498 08:40:19 -- pm/common@44 -- $ pid=1038998 00:01:56.498 08:40:19 -- pm/common@50 -- $ kill -TERM 1038998 00:01:56.498 08:40:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.498 08:40:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:56.498 08:40:19 -- pm/common@44 -- $ pid=1038999 00:01:56.498 08:40:19 -- pm/common@50 -- $ kill -TERM 1038999 00:01:56.498 08:40:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.498 08:40:19 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:56.498 08:40:19 -- pm/common@44 -- $ pid=1039022 00:01:56.498 08:40:19 -- pm/common@50 -- $ sudo -E kill -TERM 1039022 00:01:56.757 08:40:19 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:01:56.757 08:40:19 -- nvmf/common.sh@7 -- # uname -s 00:01:56.757 08:40:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:56.757 08:40:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:56.757 08:40:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:56.757 08:40:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:56.757 08:40:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:56.757 08:40:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:56.757 08:40:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:56.757 08:40:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:56.757 08:40:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:56.757 08:40:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:56.757 08:40:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:01:56.757 08:40:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:01:56.757 08:40:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:56.757 08:40:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:56.757 08:40:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:56.757 08:40:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:56.757 08:40:19 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:01:56.757 08:40:19 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:56.757 08:40:19 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:56.757 08:40:19 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:56.757 08:40:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:56.757 08:40:19 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:56.757 08:40:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:56.757 08:40:19 -- paths/export.sh@5 -- # export PATH 00:01:56.757 08:40:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:56.757 08:40:19 -- nvmf/common.sh@47 -- # : 0 00:01:56.757 08:40:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:56.757 08:40:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:56.757 08:40:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:56.757 08:40:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:56.757 08:40:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:56.757 08:40:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:56.757 08:40:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:56.757 08:40:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:56.757 08:40:19 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:56.757 08:40:19 -- spdk/autotest.sh@32 -- # uname -s 00:01:56.757 08:40:19 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:56.757 08:40:19 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:56.757 08:40:19 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/coredumps 00:01:56.757 08:40:19 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:56.757 08:40:19 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/coredumps 00:01:56.757 08:40:19 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:56.757 08:40:19 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:56.757 08:40:19 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:56.757 08:40:19 -- spdk/autotest.sh@48 -- # udevadm_pid=1097096 00:01:56.757 08:40:19 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:56.757 08:40:19 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:56.757 08:40:19 -- pm/common@17 -- # local monitor 00:01:56.757 08:40:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.757 08:40:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.758 08:40:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.758 08:40:19 -- pm/common@21 -- # date +%s 00:01:56.758 08:40:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.758 08:40:19 -- pm/common@21 -- # date +%s 00:01:56.758 08:40:19 -- pm/common@25 -- # sleep 1 00:01:56.758 08:40:19 -- pm/common@21 -- # date +%s 00:01:56.758 08:40:19 -- pm/common@21 -- # date +%s 00:01:56.758 08:40:19 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1717915219 00:01:56.758 08:40:19 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1717915219 00:01:56.758 08:40:19 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1717915219 00:01:56.758 08:40:19 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1717915219 00:01:56.758 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autotest.sh.1717915219_collect-vmstat.pm.log 00:01:56.758 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autotest.sh.1717915219_collect-cpu-load.pm.log 00:01:56.758 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autotest.sh.1717915219_collect-cpu-temp.pm.log 00:01:56.758 Redirecting to /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power/monitor.autotest.sh.1717915219_collect-bmc-pm.bmc.pm.log 00:01:57.695 08:40:20 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:57.695 08:40:20 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:57.695 08:40:20 -- common/autotest_common.sh@723 -- # xtrace_disable 00:01:57.695 08:40:20 -- common/autotest_common.sh@10 -- # set +x 00:01:57.695 08:40:20 -- spdk/autotest.sh@59 -- # create_test_list 00:01:57.695 08:40:20 -- common/autotest_common.sh@747 -- # xtrace_disable 00:01:57.695 08:40:20 -- common/autotest_common.sh@10 -- # set +x 00:01:57.695 08:40:20 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/autotest.sh 00:01:57.952 08:40:20 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:01:57.952 08:40:20 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:01:57.952 08:40:20 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output 00:01:57.952 08:40:20 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:01:57.952 08:40:20 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:57.952 08:40:20 -- common/autotest_common.sh@1454 -- # uname 00:01:57.952 08:40:20 -- common/autotest_common.sh@1454 -- # '[' Linux = FreeBSD ']' 00:01:57.952 08:40:20 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:57.952 08:40:20 -- common/autotest_common.sh@1474 -- # uname 00:01:57.952 08:40:20 -- common/autotest_common.sh@1474 -- # [[ Linux = FreeBSD ]] 00:01:57.952 08:40:20 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:57.952 08:40:20 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:57.952 08:40:20 -- spdk/autotest.sh@72 -- # hash lcov 00:01:57.952 08:40:20 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:57.952 08:40:20 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:57.952 --rc lcov_branch_coverage=1 00:01:57.952 --rc lcov_function_coverage=1 00:01:57.952 --rc genhtml_branch_coverage=1 00:01:57.952 --rc genhtml_function_coverage=1 00:01:57.952 --rc genhtml_legend=1 00:01:57.952 --rc geninfo_all_blocks=1 00:01:57.952 ' 
00:01:57.952 08:40:20 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:57.952 --rc lcov_branch_coverage=1 00:01:57.952 --rc lcov_function_coverage=1 00:01:57.952 --rc genhtml_branch_coverage=1 00:01:57.952 --rc genhtml_function_coverage=1 00:01:57.952 --rc genhtml_legend=1 00:01:57.952 --rc geninfo_all_blocks=1 00:01:57.952 ' 00:01:57.952 08:40:20 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:57.952 --rc lcov_branch_coverage=1 00:01:57.952 --rc lcov_function_coverage=1 00:01:57.952 --rc genhtml_branch_coverage=1 00:01:57.952 --rc genhtml_function_coverage=1 00:01:57.952 --rc genhtml_legend=1 00:01:57.952 --rc geninfo_all_blocks=1 00:01:57.952 --no-external' 00:01:57.952 08:40:20 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:57.952 --rc lcov_branch_coverage=1 00:01:57.952 --rc lcov_function_coverage=1 00:01:57.952 --rc genhtml_branch_coverage=1 00:01:57.952 --rc genhtml_function_coverage=1 00:01:57.952 --rc genhtml_legend=1 00:01:57.952 --rc geninfo_all_blocks=1 00:01:57.952 --no-external' 00:01:57.952 08:40:20 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:57.952 lcov: LCOV version 1.14 00:01:57.952 08:40:20 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/cov_base.info 00:02:06.130 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:06.130 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:18.329 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:18.329 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:18.329 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:18.329 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:18.329 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:18.329 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:18.329 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:18.329 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:18.329 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:18.329 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:18.329 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:18.329 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:18.329 
[geninfo repeated the same two-line warning ("<header>.gcno:no functions found", then "GCOV did not produce any data for <header>.gcno") for every remaining object under test/cpp_headers, from bdev.gcno through zipf.gcno.]
00:02:19.265 08:40:41 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:19.265 08:40:41 -- common/autotest_common.sh@723 -- # xtrace_disable 00:02:19.265 08:40:41 -- common/autotest_common.sh@10 -- # set +x 00:02:19.265
08:40:41 -- spdk/autotest.sh@91 -- # rm -f 00:02:19.265 08:40:41 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh reset 00:02:21.799 0000:5f:00.0 (1b96 2600): Already using the nvme driver 00:02:21.799 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:21.799 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:21.799 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:21.799 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:21.799 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:21.799 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:21.799 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:22.057 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:22.057 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:22.057 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:22.057 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:22.057 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:22.057 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:22.057 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:22.057 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:22.057 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:22.057 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:22.057 08:40:44 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:22.057 08:40:44 -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:02:22.057 08:40:44 -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:02:22.057 08:40:44 -- common/autotest_common.sh@1669 -- # local nvme bdf 00:02:22.057 08:40:44 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:02:22.057 08:40:44 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:02:22.057 08:40:44 -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:02:22.057 08:40:44 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:22.057 08:40:44 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:02:22.057 08:40:44 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:02:22.057 08:40:44 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme1n1 00:02:22.057 08:40:44 -- common/autotest_common.sh@1661 -- # local device=nvme1n1 00:02:22.057 08:40:44 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:22.057 08:40:44 -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:02:22.057 08:40:44 -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:02:22.057 08:40:44 -- common/autotest_common.sh@1672 -- # is_block_zoned nvme1n2 00:02:22.057 08:40:44 -- common/autotest_common.sh@1661 -- # local device=nvme1n2 00:02:22.057 08:40:44 -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:02:22.057 08:40:44 -- common/autotest_common.sh@1664 -- # [[ host-managed != none ]] 00:02:22.057 08:40:44 -- common/autotest_common.sh@1673 -- # zoned_devs["${nvme##*/}"]=0000:5f:00.0 00:02:22.057 08:40:44 -- spdk/autotest.sh@98 -- # (( 1 > 0 )) 00:02:22.057 08:40:44 -- spdk/autotest.sh@103 -- # export PCI_BLOCKED=0000:5f:00.0 00:02:22.057 08:40:44 -- spdk/autotest.sh@103 -- # PCI_BLOCKED=0000:5f:00.0 00:02:22.057 08:40:44 -- spdk/autotest.sh@104 -- # export PCI_ZONED=0000:5f:00.0 00:02:22.057 08:40:44 -- spdk/autotest.sh@104 -- # PCI_ZONED=0000:5f:00.0 
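get_zoned_devs above classifies each NVMe namespace by reading its queue/zoned sysfs attribute; the host-managed device behind 0000:5f:00.0 is recorded so that it is excluded from the wipe loop that follows and exported as PCI_BLOCKED/PCI_ZONED. A condensed sketch of that classify-then-wipe flow (the BDF lookup is omitted and the device globs are illustrative):

declare -A zoned
for sysdev in /sys/block/nvme*; do
    dev=${sysdev##*/}
    # "none" means a conventional device; "host-managed"/"host-aware" mean zoned
    [[ -e $sysdev/queue/zoned && $(cat "$sysdev/queue/zoned") != none ]] && zoned[$dev]=1
done
for dev in /dev/nvme*n*; do
    [[ $dev == *p* ]] && continue                    # skip partitions
    [[ -n ${zoned[${dev##*/}]:-} ]] && continue      # never zero a zoned namespace
    if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1      # clear stale metadata on unused disks
    fi
done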
00:02:22.057 08:40:44 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:22.057 08:40:44 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:22.057 08:40:44 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:22.057 08:40:44 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:22.057 08:40:44 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:22.315 No valid GPT data, bailing 00:02:22.315 08:40:44 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:22.315 08:40:44 -- scripts/common.sh@391 -- # pt= 00:02:22.315 08:40:44 -- scripts/common.sh@392 -- # return 1 00:02:22.315 08:40:44 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:22.315 1+0 records in 00:02:22.315 1+0 records out 00:02:22.315 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00441509 s, 237 MB/s 00:02:22.315 08:40:44 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:22.315 08:40:44 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:22.315 08:40:44 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:02:22.315 08:40:44 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:02:22.315 08:40:44 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:02:22.315 No valid GPT data, bailing 00:02:22.315 08:40:44 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:02:22.315 08:40:44 -- scripts/common.sh@391 -- # pt= 00:02:22.315 08:40:44 -- scripts/common.sh@392 -- # return 1 00:02:22.315 08:40:44 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:02:22.315 1+0 records in 00:02:22.315 1+0 records out 00:02:22.315 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00532652 s, 197 MB/s 00:02:22.315 08:40:44 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:22.315 08:40:44 -- spdk/autotest.sh@112 -- # [[ -z 0000:5f:00.0 ]] 00:02:22.315 08:40:44 -- spdk/autotest.sh@112 -- # continue 00:02:22.315 08:40:44 -- spdk/autotest.sh@118 -- # sync 00:02:22.315 08:40:44 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:22.315 08:40:44 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:22.315 08:40:44 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:27.578 08:40:49 -- spdk/autotest.sh@124 -- # uname -s 00:02:27.578 08:40:49 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:27.578 08:40:49 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/test-setup.sh 00:02:27.578 08:40:49 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:02:27.578 08:40:49 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:02:27.578 08:40:49 -- common/autotest_common.sh@10 -- # set +x 00:02:27.578 ************************************ 00:02:27.578 START TEST setup.sh 00:02:27.578 ************************************ 00:02:27.578 08:40:49 setup.sh -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/test-setup.sh 00:02:27.578 * Looking for test storage... 
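The START TEST / END TEST banners framing each suite come from the harness's run_test helper, which wraps a test script, prints the banners, and propagates the exit code. Roughly (a sketch, not the actual autotest_common.sh definition):

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    "$@"                       # execute the suite script with xtrace enabled
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}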
00:02:27.578 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup 00:02:27.578 08:40:49 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:27.578 08:40:49 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:27.578 08:40:49 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/acl.sh 00:02:27.578 08:40:49 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:02:27.578 08:40:49 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:02:27.578 08:40:49 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:27.578 ************************************ 00:02:27.578 START TEST acl 00:02:27.578 ************************************ 00:02:27.578 08:40:49 setup.sh.acl -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/acl.sh 00:02:27.578 * Looking for test storage... 00:02:27.578 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup 00:02:27.578 08:40:49 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:27.578 08:40:49 setup.sh.acl -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:02:27.578 08:40:49 setup.sh.acl -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:02:27.578 08:40:49 setup.sh.acl -- common/autotest_common.sh@1669 -- # local nvme bdf 00:02:27.578 08:40:49 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:02:27.578 08:40:49 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:02:27.578 08:40:49 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:02:27.578 08:40:49 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:27.578 08:40:49 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:02:27.578 08:40:49 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:02:27.578 08:40:49 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme1n1 00:02:27.578 08:40:49 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme1n1 00:02:27.578 08:40:49 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:27.578 08:40:49 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:02:27.578 08:40:49 setup.sh.acl -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:02:27.578 08:40:49 setup.sh.acl -- common/autotest_common.sh@1672 -- # is_block_zoned nvme1n2 00:02:27.578 08:40:49 setup.sh.acl -- common/autotest_common.sh@1661 -- # local device=nvme1n2 00:02:27.578 08:40:49 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:02:27.579 08:40:49 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ host-managed != none ]] 00:02:27.579 08:40:49 setup.sh.acl -- common/autotest_common.sh@1673 -- # zoned_devs["${nvme##*/}"]=0000:5f:00.0 00:02:27.579 08:40:49 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:27.579 08:40:49 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:27.579 08:40:49 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:27.579 08:40:49 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:27.579 08:40:49 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:27.579 08:40:49 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:27.579 08:40:49 setup.sh.acl -- setup/common.sh@12 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh reset 00:02:30.111 08:40:52 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:30.111 08:40:52 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:30.111 08:40:52 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:30.111 08:40:52 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:30.111 08:40:52 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:30.111 08:40:52 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh status 00:02:32.638 Hugepages 00:02:32.638 node hugesize free / total 00:02:32.638 08:40:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:32.638 08:40:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:32.638 08:40:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.638 08:40:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:32.638 08:40:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:32.638 08:40:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.638 08:40:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:32.638 08:40:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:32.638 08:40:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.638 00:02:32.638 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:32.638 08:40:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:32.638 08:40:55 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:32.638 08:40:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.638 08:40:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:32.638 08:40:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.638 08:40:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.638 08:40:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.638 08:40:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:32.638 08:40:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.638 08:40:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.639 08:40:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.639 08:40:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:32.639 08:40:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.639 08:40:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.639 08:40:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.639 08:40:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:32.639 08:40:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.639 08:40:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.639 08:40:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.639 08:40:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:32.639 08:40:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.639 08:40:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.639 08:40:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.639 08:40:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:32.639 08:40:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.639 08:40:55 
setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.639 08:40:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.639 08:40:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:32.639 08:40:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.639 08:40:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.639 08:40:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.639 08:40:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:32.639 08:40:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.639 08:40:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.639 08:40:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.896 08:40:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:02:32.896 08:40:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:32.896 08:40:55 setup.sh.acl -- setup/acl.sh@21 -- # [[ 0000:5f:00.0 == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5f:00.0 == *:*:*.* ]] 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@21 -- # [[ 0000:5f:00.0 == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@21 -- # continue 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.897 
08:40:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:32.897 08:40:55 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:32.897 08:40:55 setup.sh.acl -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:02:32.897 08:40:55 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:02:32.897 08:40:55 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:32.897 ************************************ 00:02:32.897 START TEST denied 00:02:32.897 ************************************ 00:02:32.897 08:40:55 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # denied 00:02:32.897 08:40:55 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED='0000:5f:00.0 0000:5e:00.0' 00:02:32.897 08:40:55 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:32.897 08:40:55 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:02:32.897 08:40:55 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:32.897 08:40:55 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh config 00:02:36.183 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:02:36.183 08:40:58 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:02:36.183 08:40:58 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:36.183 08:40:58 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:36.183 08:40:58 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:02:36.183 08:40:58 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:02:36.183 08:40:58 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:36.183 08:40:58 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:36.183 08:40:58 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:36.183 08:40:58 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:36.183 08:40:58 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh reset 00:02:40.370 00:02:40.370 real 0m6.702s 00:02:40.370 user 0m2.152s 00:02:40.370 sys 0m3.821s 00:02:40.370 08:41:02 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # xtrace_disable 00:02:40.370 08:41:02 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:40.370 ************************************ 00:02:40.370 END TEST denied 00:02:40.370 ************************************ 00:02:40.370 08:41:02 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:40.370 08:41:02 setup.sh.acl -- 
common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:02:40.370 08:41:02 setup.sh.acl -- common/autotest_common.sh@1106 -- # xtrace_disable 00:02:40.370 08:41:02 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:40.370 ************************************ 00:02:40.370 START TEST allowed 00:02:40.370 ************************************ 00:02:40.371 08:41:02 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # allowed 00:02:40.371 08:41:02 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:02:40.371 08:41:02 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:40.371 08:41:02 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:02:40.371 08:41:02 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:40.371 08:41:02 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh config 00:02:43.692 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:02:43.692 08:41:06 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:43.692 08:41:06 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:43.692 08:41:06 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:43.692 08:41:06 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:43.692 08:41:06 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh reset 00:02:47.020 00:02:47.020 real 0m7.128s 00:02:47.020 user 0m2.296s 00:02:47.020 sys 0m3.993s 00:02:47.020 08:41:09 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # xtrace_disable 00:02:47.020 08:41:09 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:47.020 ************************************ 00:02:47.020 END TEST allowed 00:02:47.020 ************************************ 00:02:47.020 00:02:47.020 real 0m20.048s 00:02:47.020 user 0m6.813s 00:02:47.020 sys 0m11.838s 00:02:47.020 08:41:09 setup.sh.acl -- common/autotest_common.sh@1125 -- # xtrace_disable 00:02:47.020 08:41:09 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:47.020 ************************************ 00:02:47.020 END TEST acl 00:02:47.020 ************************************ 00:02:47.020 08:41:09 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/hugepages.sh 00:02:47.020 08:41:09 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:02:47.020 08:41:09 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:02:47.020 08:41:09 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:47.020 ************************************ 00:02:47.020 START TEST hugepages 00:02:47.020 ************************************ 00:02:47.020 08:41:09 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/hugepages.sh 00:02:47.020 * Looking for test storage... 
00:02:47.020 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup 00:02:47.020 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:47.020 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:47.020 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:47.020 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:47.020 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:47.020 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:47.020 08:41:09 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:47.020 08:41:09 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:47.020 08:41:09 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:47.020 08:41:09 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:47.020 08:41:09 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.020 08:41:09 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:47.020 08:41:09 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:47.020 08:41:09 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.020 08:41:09 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.020 08:41:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.020 08:41:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322984 kB' 'MemFree: 76375744 kB' 'MemAvailable: 79784460 kB' 'Buffers: 2696 kB' 'Cached: 9306712 kB' 'SwapCached: 0 kB' 'Active: 6260128 kB' 'Inactive: 3517220 kB' 'Active(anon): 5874436 kB' 'Inactive(anon): 0 kB' 'Active(file): 385692 kB' 'Inactive(file): 3517220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 471084 kB' 'Mapped: 184256 kB' 'Shmem: 5406496 kB' 'KReclaimable: 204064 kB' 'Slab: 641556 kB' 'SReclaimable: 204064 kB' 'SUnreclaim: 437492 kB' 'KernelStack: 19552 kB' 'PageTables: 8208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52952944 kB' 'Committed_AS: 7303068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219012 kB' 'VmallocChunk: 0 kB' 'Percpu: 63744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1332180 kB' 'DirectMap2M: 18270208 kB' 'DirectMap1G: 82837504 kB' 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.021 08:41:09 
setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:47.021 08:41:09 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.021 08:41:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue trace elided for the remaining /proc/meminfo keys (Mlocked through HugePages_Surp); none matches Hugepagesize until the final iteration below ...]
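The elided scan above (and the matching iteration just below) is setup/common.sh's get_meminfo walking /proc/meminfo line by line with IFS=': ' until it hits the requested key. A minimal standalone sketch of that mechanism, simplified to a single while-read loop rather than the mapfile-based form the trace shows; the helper name get_meminfo_sketch is hypothetical:

#!/usr/bin/env bash
# Sketch: scan /proc/meminfo for one key and print its value, as the
# traced loop does. Values are in kB for sized fields; the HugePages_*
# counters are plain numbers.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # mirrors the [[ ... ]] / continue pairs in the trace
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo_sketch Hugepagesize   # prints 2048 on this system, per the trace below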
setup/common.sh@31 -- # read -r var val _ 00:02:47.022 08:41:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:47.022 08:41:09 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:02:47.022 08:41:09 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:02:47.022 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:47.022 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:47.022 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:47.022 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:47.022 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:47.022 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:47.022 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:47.022 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:02:47.022 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:02:47.022 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:47.022 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:47.022 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:47.022 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:47.022 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:47.022 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:47.022 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:02:47.022 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:47.023 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:47.023 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:47.023 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:47.023 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:47.023 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:47.282 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:47.282 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:47.282 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:47.282 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:47.282 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:47.282 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:47.282 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:47.282 08:41:09 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:47.282 08:41:09 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:02:47.282 08:41:09 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:02:47.282 08:41:09 
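Before run_test launches default_setup, the clear_hp pass traced above zeroes every per-node hugepage pool (the four "echo 0" entries: two page sizes on each of the two NUMA nodes) and exports CLEAR_HUGE=yes. A sketch of that cleanup, under the assumption that the zeros are written to the standard sysfs nr_hugepages files, which xtrace output does not show redirection targets for:

#!/usr/bin/env bash
# Sketch: reset all per-node, per-size hugepage reservations to 0 so the
# test starts from a clean slate. Needs root; the glob matches e.g.
# hugepages-2048kB and hugepages-1048576kB directories.
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done
done
export CLEAR_HUGE=yes   # matches the traced export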
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:47.282 ************************************ 00:02:47.282 START TEST default_setup 00:02:47.282 ************************************ 00:02:47.282 08:41:09 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # default_setup 00:02:47.282 08:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:47.282 08:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:02:47.282 08:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:47.282 08:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:02:47.282 08:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:47.282 08:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:02:47.282 08:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:47.283 08:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:47.283 08:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:47.283 08:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:47.283 08:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:02:47.283 08:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:47.283 08:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:47.283 08:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:47.283 08:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:47.283 08:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:47.283 08:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:47.283 08:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:47.283 08:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:02:47.283 08:41:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:02:47.283 08:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:02:47.283 08:41:09 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh 00:02:49.814 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:02:50.072 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:50.072 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:50.072 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:50.072 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:50.072 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:50.072 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:50.072 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:50.072 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:50.072 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:50.072 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:50.331 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:50.331 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:50.331 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:50.331 0000:80:04.2 (8086 2021): ioatdma -> 
vfio-pci 00:02:50.331 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:50.331 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:51.272 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322984 kB' 'MemFree: 78521408 kB' 'MemAvailable: 81930040 kB' 'Buffers: 2696 kB' 'Cached: 9306832 kB' 'SwapCached: 0 kB' 'Active: 6275592 kB' 'Inactive: 3517220 kB' 'Active(anon): 5889900 kB' 'Inactive(anon): 0 kB' 'Active(file): 385692 kB' 'Inactive(file): 3517220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486072 kB' 'Mapped: 183944 kB' 'Shmem: 5406616 kB' 'KReclaimable: 203896 kB' 'Slab: 639800 kB' 'SReclaimable: 203896 kB' 'SUnreclaim: 435904 kB' 'KernelStack: 19920 kB' 'PageTables: 9152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001520 kB' 'Committed_AS: 7320676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219172 kB' 'VmallocChunk: 0 kB' 'Percpu: 63744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1332180 kB' 
'DirectMap2M: 18270208 kB' 'DirectMap1G: 82837504 kB' 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.272 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.273 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:02:51.273 08:41:13 setup.sh.hugepages.default_setup --
[... identical read/compare/continue trace elided for the remaining /proc/meminfo keys (Inactive(anon) through CommitLimit); none matches AnonHugePages ...]
setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.273 08:41:13
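The scan in progress here is verify_nr_hugepages collecting AnonHugePages (anon=0 just below); the same walk then repeats for HugePages_Surp and HugePages_Rsvd further down. A sketch of that counter gathering; the awk helper is an illustrative assumption, and the pass/fail comparison itself is not shown in this part of the trace:

#!/usr/bin/env bash
# Sketch: pull the hugepage counters the verification pass reads.
meminfo() { awk -v k="$1" -F': +' '$1 == k { print $2 + 0; exit }' /proc/meminfo; }

anon=$(meminfo AnonHugePages)    # 0 in this run (no transparent hugepages counted)
surp=$(meminfo HugePages_Surp)   # 0 in this run
resv=$(meminfo HugePages_Rsvd)
total=$(meminfo HugePages_Total) # the trace's snapshots show 1024, i.e. nr_hugepages

echo "anon=$anon surp=$surp resv=$resv total=$total"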
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.273 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.273 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.273 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.273 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.273 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.273 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.273 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.273 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.273 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.273 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.273 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.273 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.273 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.273 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.273 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.273 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.273 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.273 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n 
'' ]] 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322984 kB' 'MemFree: 78521152 kB' 'MemAvailable: 81929784 kB' 'Buffers: 2696 kB' 'Cached: 9306832 kB' 'SwapCached: 0 kB' 'Active: 6275016 kB' 'Inactive: 3517220 kB' 'Active(anon): 5889324 kB' 'Inactive(anon): 0 kB' 'Active(file): 385692 kB' 'Inactive(file): 3517220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485924 kB' 'Mapped: 183860 kB' 'Shmem: 5406616 kB' 'KReclaimable: 203896 kB' 'Slab: 639708 kB' 'SReclaimable: 203896 kB' 'SUnreclaim: 435812 kB' 'KernelStack: 19808 kB' 'PageTables: 8812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001520 kB' 'Committed_AS: 7320692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219188 kB' 'VmallocChunk: 0 kB' 'Percpu: 63744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1332180 kB' 'DirectMap2M: 18270208 kB' 'DirectMap1G: 82837504 kB' 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:51.274 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... identical read/compare/continue trace elided for the remaining /proc/meminfo keys (SwapCached through CmaFree); none matches HugePages_Surp ...]
setup/common.sh@31 -- # read -r var val _ 00:02:51.275 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted ==
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:51.275 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... the same IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue trace repeats for HugePages_Total, HugePages_Free and HugePages_Rsvd ...]
00:02:51.275 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:51.275 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:51.275 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:51.275 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:02:51.275 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:51.276 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:51.276 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:51.276 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:51.276 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:51.276 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:51.276 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:51.276 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:51.276 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:51.276 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:51.276 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:51.276 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:51.276 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322984 kB' 'MemFree: 78519352 kB' 'MemAvailable: 81927984 kB' 'Buffers: 2696 kB' 'Cached: 9306852 kB' 'SwapCached: 0 kB' 'Active: 6275024 kB' 'Inactive: 3517220 kB' 'Active(anon): 5889332 kB' 'Inactive(anon): 0 kB' 'Active(file): 385692 kB' 'Inactive(file): 3517220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486436 kB' 'Mapped: 183868 kB' 'Shmem: 5406636 kB' 'KReclaimable: 203896 kB' 'Slab: 639708 kB' 'SReclaimable: 203896 kB' 'SUnreclaim: 435812 kB' 'KernelStack: 19936 kB' 'PageTables: 9316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001520 kB' 'Committed_AS: 7320716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219156 kB' 'VmallocChunk: 0 kB' 'Percpu: 63744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1332180 kB' 'DirectMap2M: 18270208 kB' 'DirectMap1G: 82837504 kB'
[... get_meminfo then walks every key ahead of the target in the file (MemTotal, MemFree, …, HugePages_Total, HugePages_Free), repeating the IFS=': ' / read -r var val _ / continue trace for each line that is not HugePages_Rsvd ...]
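[Editor's note: the repeated IFS/read/continue pattern above is just a key lookup over a meminfo file. A condensed re-implementation of that lookup, hedged as a sketch of the setup/common.sh pattern visible in the trace (function name get_meminfo_sketch is hypothetical, not the verbatim SPDK source):

  get_meminfo_sketch() {
      # $1 = key to look up (e.g. HugePages_Rsvd), $2 = optional NUMA node id.
      local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      # Per-node files prefix every line with "Node <N> " (the trace strips it
      # with mem=("${mem[@]#Node +([0-9]) }")); sed does the same job here.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
      done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
      return 1
  }

Called as get_meminfo_sketch HugePages_Rsvd on the snapshot above it would print 0, matching the resv=0 assignment that follows.]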
00:02:51.278 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:51.278 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:51.278 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:51.278 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:02:51.278 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:02:51.278 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:02:51.278 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:02:51.278 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:02:51.278 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:51.278 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:51.278 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:51.278 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:51.278 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:51.278 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:51.278 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:51.278 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:51.278 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:51.278 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:51.278 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:51.278 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:51.278 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:51.278 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:51.279 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322984 kB' 'MemFree: 78517164 kB' 'MemAvailable: 81925796 kB' 'Buffers: 2696 kB' 'Cached: 9306852 kB' 'SwapCached: 0 kB' 'Active: 6275700 kB' 'Inactive: 3517220 kB' 'Active(anon): 5890008 kB' 'Inactive(anon): 0 kB' 'Active(file): 385692 kB' 'Inactive(file): 3517220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486676 kB' 'Mapped: 183852 kB' 'Shmem: 5406636 kB' 'KReclaimable: 203896 kB' 'Slab: 639708 kB' 'SReclaimable: 203896 kB' 'SUnreclaim: 435812 kB' 'KernelStack: 20112 kB' 'PageTables: 9256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001520 kB' 'Committed_AS: 7320736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219172 kB' 'VmallocChunk: 0 kB' 'Percpu: 63744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1332180 kB' 'DirectMap2M: 18270208 kB' 'DirectMap1G: 82837504 kB'
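[Editor's note: the two arithmetic guards traced above, (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )), encode the hugetlb pool invariant: HugePages_Total must equal the pages the test requested plus any surplus and reserved pages. A standalone version of that assertion, built on the hypothetical get_meminfo_sketch shown earlier:

  verify_hugepages_pool() {
      # Mirrors the trace's guards: total == requested + surplus + reserved.
      local expected=$1 total surp resv
      total=$(get_meminfo_sketch HugePages_Total)
      surp=$(get_meminfo_sketch HugePages_Surp)
      resv=$(get_meminfo_sketch HugePages_Rsvd)
      if (( total != expected + surp + resv )); then
          echo "hugepage pool mismatch: total=$total expected=$expected surp=$surp resv=$resv" >&2
          return 1
      fi
  }

On the snapshot above (HugePages_Total: 1024, HugePages_Surp: 0, HugePages_Rsvd: 0) this passes for expected=1024, which is why the test proceeds to the per-node checks.]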
[... get_meminfo walks the snapshot keys (MemTotal through Unaccepted) with the same IFS=': ' / read -r var val _ / continue trace until it reaches HugePages_Total ...]
00:02:51.280 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:51.280 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:02:51.280 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:51.280 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:51.280 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:02:51.280 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:02:51.280 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:51.280 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:51.280 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:51.280 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:51.280 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:51.280 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:51.280 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:51.280 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:51.280 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:51.280 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:51.280 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:02:51.280 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:51.280 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:51.280 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:51.280 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:51.280 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:51.280 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:51.280 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:51.280 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:51.280 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:51.280 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634628 kB' 'MemFree: 21871980 kB' 'MemUsed: 10762648 kB' 'SwapCached: 0 kB' 'Active: 4376532 kB' 'Inactive: 3365944 kB' 'Active(anon): 4206472 kB' 'Inactive(anon): 0 kB' 'Active(file): 170060 kB' 'Inactive(file): 3365944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7402036 kB' 'Mapped: 40324 kB' 'AnonPages: 343604 kB' 'Shmem: 3866032 kB' 'KernelStack: 11816 kB' 'PageTables: 6432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111076 kB' 'Slab: 350188 kB' 'SReclaimable: 111076 kB' 'SUnreclaim: 239112 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
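[Editor's note: the get_nodes fragment above enumerates NUMA nodes through sysfs; the trace only shows the expanded assignments (nodes_sys[0]=1024, nodes_sys[1]=0), so the sketch below assumes the counts come from each node's hugepages-2048kB/nr_hugepages file:

  shopt -s extglob nullglob
  declare -a nodes_sys=()
  for node in /sys/devices/system/node/node+([0-9]); do
      # node id = suffix after the last "node"; value = that node's current
      # 2 MiB hugepage pool size (assumed source of the 1024/0 seen above)
      nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  no_nodes=${#nodes_sys[@]}   # 2 on this machine, per the trace

On this box all 1024 default pages landed on node 0, which is why the per-node check that follows only expects pages there.]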
[... get_meminfo repeats the continue trace for each node0 key (MemTotal through HugePages_Free) until it reaches HugePages_Surp ...]
00:02:51.282 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:51.282 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:51.282 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:51.282 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:51.282 08:41:13 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:51.282 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:51.282 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:51.282 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:51.282 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:51.282 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:02:51.282 node0=1024 expecting 1024
00:02:51.282 08:41:13 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:02:51.282 
00:02:51.282 real	0m4.142s
00:02:51.282 user	0m1.373s
00:02:51.282 sys	0m2.003s
00:02:51.282 08:41:13 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # xtrace_disable
00:02:51.282 08:41:13 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:02:51.282 ************************************
00:02:51.282 END TEST default_setup
00:02:51.282 ************************************
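The scan that just finished is common.sh's get_meminfo helper: snapshot the meminfo source, split each record on ': ', and keep reading until the requested key turns up. A minimal self-contained sketch of the same pattern, simplified from the traced script (which snapshots with mapfile before scanning, where this reads the file directly):

    #!/usr/bin/env bash
    shopt -s extglob    # enables the +([0-9]) pattern used below

    # Print one field's value from /proc/meminfo, or from a NUMA node's
    # meminfo file when a node id is passed as the second argument.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local line var val _
        while read -r line; do
            line=${line#Node +([0-9]) }    # per-node lines carry a "Node N " prefix
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done <"$mem_f"
        return 1
    }

    get_meminfo HugePages_Surp    # prints 0 on this box, per the trace above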
00:02:51.282 08:41:13 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:02:51.282 08:41:13 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:02:51.282 08:41:13 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:02:51.282 08:41:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:51.541 ************************************
00:02:51.541 START TEST per_node_1G_alloc
00:02:51.541 ************************************
00:02:51.541 08:41:13 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # per_node_1G_alloc
00:02:51.541 08:41:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:02:51.541 08:41:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:02:51.541 08:41:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:02:51.541 08:41:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:02:51.541 08:41:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:02:51.541 08:41:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:02:51.541 08:41:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:02:51.541 08:41:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:51.541 08:41:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:02:51.541 08:41:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:02:51.541 08:41:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:02:51.541 08:41:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:02:51.541 08:41:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:02:51.541 08:41:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:51.541 08:41:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:51.541 08:41:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:51.541 08:41:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:02:51.541 08:41:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:51.541 08:41:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:51.541 08:41:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:51.541 08:41:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:51.541 08:41:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:02:51.541 08:41:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:02:51.541 08:41:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:02:51.541 08:41:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:02:51.541 08:41:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:51.541 08:41:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh
00:02:54.076 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0
00:02:54.076 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:02:54.076 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:54.338 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:02:54.338 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:02:54.338 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:02:54.338 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:02:54.338 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:02:54.338 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:02:54.338 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:02:54.338 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:02:54.338 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:02:54.338 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:02:54.338 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:02:54.338 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:02:54.338 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:02:54.338 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:02:54.338 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
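The parameter setup above is get_test_nr_hugepages doing its sizing math: 1048576 kB (1 GiB) per node divided by the 2048 kB default hugepage size gives 512 pages for each of nodes 0 and 1, so the run expects 1024 pages in total. A short sketch of that arithmetic, with names borrowed from the traced hugepages.sh and values as in this run:

    # Sizing math from the trace: per-node request / default page size.
    default_hugepages=2048                        # kB, the Hugepagesize of this box
    size=1048576                                  # kB requested per node (1 GiB)
    node_ids=(0 1)
    nr_hugepages=$(( size / default_hugepages ))  # 512 pages per node
    nodes_test=()
    for node in "${node_ids[@]}"; do
        nodes_test[node]=$nr_hugepages            # nodes_test[0]=512, nodes_test[1]=512
    done
    echo $(( nr_hugepages * ${#node_ids[@]} ))    # 1024, the total checked later

The runner then exports NRHUGE=512 and HUGENODE=0,1 before calling scripts/setup.sh, which is what produced the device-claiming output above.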
00:02:54.338 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:02:54.338 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:02:54.338 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:02:54.338 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:54.338 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:54.338 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:54.338 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:54.338 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:54.338 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:54.339 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:54.339 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:54.339 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:54.339 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:54.339 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:54.339 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:54.339 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:54.339 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:54.339 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:54.339 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:54.339 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:54.339 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:54.339 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322984 kB' 'MemFree: 78537344 kB' 'MemAvailable: 81945976 kB' 'Buffers: 2696 kB' 'Cached: 9306972 kB' 'SwapCached: 0 kB' 'Active: 6274800 kB' 'Inactive: 3517220 kB' 'Active(anon): 5889108 kB' 'Inactive(anon): 0 kB' 'Active(file): 385692 kB' 'Inactive(file): 3517220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485620 kB' 'Mapped: 183860 kB' 'Shmem: 5406756 kB' 'KReclaimable: 203896 kB' 'Slab: 639628 kB' 'SReclaimable: 203896 kB' 'SUnreclaim: 435732 kB' 'KernelStack: 19648 kB' 'PageTables: 8288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001520 kB' 'Committed_AS: 7318600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219108 kB' 'VmallocChunk: 0 kB' 'Percpu: 63744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1332180 kB' 'DirectMap2M: 18270208 kB' 'DirectMap1G: 82837504 kB'
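A quick consistency check on the hugepage fields in that snapshot: the Hugetlb total is simply the page count times the page size.

    # HugePages_Total x Hugepagesize should equal the Hugetlb field.
    echo $(( 1024 * 2048 ))    # 2097152 kB, matching 'Hugetlb: 2097152 kB'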
00:02:54.339 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # IFS=': '; read -r var val _; continue  (field-by-field scan over the snapshot from MemTotal through HardwareCorrupted; none matches AnonHugePages)
00:02:54.340 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:54.340 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:54.340 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:54.340 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:02:54.340 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:54.340 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:54.340 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:54.340 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:54.340 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:54.340 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:54.340 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:54.340 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:54.340 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:54.340 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:54.340 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:54.340 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:54.340 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322984 kB' 'MemFree: 78544472 kB' 'MemAvailable: 81953104 kB' 'Buffers: 2696 kB' 'Cached: 9306976 kB' 'SwapCached: 0 kB' 'Active: 6272228 kB' 'Inactive: 3517220 kB' 'Active(anon): 5886536 kB' 'Inactive(anon): 0 kB' 'Active(file): 385692 kB' 'Inactive(file): 3517220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483088 kB' 'Mapped: 182684 kB' 'Shmem: 5406760 kB' 'KReclaimable: 203896 kB' 'Slab: 639620 kB' 'SReclaimable: 203896 kB' 'SUnreclaim: 435724 kB' 'KernelStack: 19584 kB' 'PageTables: 8000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001520 kB' 'Committed_AS: 7308356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219028 kB' 'VmallocChunk: 0 kB' 'Percpu: 63744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1332180 kB' 'DirectMap2M: 18270208 kB' 'DirectMap1G: 82837504 kB'
00:02:54.340 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # IFS=': '; read -r var val _; continue  (field-by-field scan over the snapshot from MemTotal through HugePages_Rsvd; none matches HugePages_Surp)
00:02:54.342 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:54.342 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:54.342 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:54.342 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
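With surp known, only HugePages_Rsvd is left before the per-node assertions. The per-node page counts those assertions check can also be read straight from the kernel's hugetlb sysfs tree; a quick loop for the 2048 kB page size, with the values this run should produce:

    # Print each NUMA node's current 2 MiB hugepage count.
    for d in /sys/devices/system/node/node*/hugepages/hugepages-2048kB; do
        node=${d#/sys/devices/system/node/node}; node=${node%%/*}
        printf 'node%s=%s\n' "$node" "$(<"$d/nr_hugepages")"
    done
    # Expected here: node0=512 and node1=512, i.e. 1024 pages total.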
00:02:54.342 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:54.342 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:54.342 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:54.342 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:54.342 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:54.342 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:54.342 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:54.342 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:54.342 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:54.342 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:54.342 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322984 kB' 'MemFree: 78549632 kB' 'MemAvailable: 81958260 kB' 'Buffers: 2696 kB' 'Cached: 9306996 kB' 'SwapCached: 0 kB' 'Active: 6272232 kB' 'Inactive: 3517220 kB' 'Active(anon): 5886540 kB' 'Inactive(anon): 0 kB' 'Active(file): 385692 kB' 'Inactive(file): 3517220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483068 kB' 'Mapped: 182684 kB' 'Shmem: 5406780 kB' 'KReclaimable: 203888 kB' 'Slab: 639712 kB' 'SReclaimable: 203888 kB' 'SUnreclaim: 435824 kB' 'KernelStack: 19584 kB' 'PageTables: 8024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001520 kB' 'Committed_AS: 7308380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219028 kB' 'VmallocChunk: 0 kB' 'Percpu: 63744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1332180 kB' 'DirectMap2M: 18270208 kB' 'DirectMap1G: 82837504 kB'
[repeated setup/common.sh@31/@32 read/compare/continue traces elided: every /proc/meminfo key from MemTotal down through HugePages_Free is tested against HugePages_Rsvd and skipped until the match below]
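A reading note on the backslash runs in these comparisons: they are not corruption. Under set -x, bash prints the right-hand side of an unquoted [[ $var == pattern ]] with each character backslash-escaped so the word re-reads as a literal pattern, which renders HugePages_Rsvd as \H\u\g\e\P\a\g\e\s\_\R\s\v\d. Reproducible in any bash shell:

    $ set -x
    $ [[ MemTotal == HugePages_Rsvd ]]
    + [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]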
00:02:54.607 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:54.607 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:54.607 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:54.607 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:54.607 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:54.607 nr_hugepages=1024
00:02:54.607 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:54.607 resv_hugepages=0
00:02:54.607 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:54.607 surplus_hugepages=0
00:02:54.607 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:54.607 anon_hugepages=0
00:02:54.607 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
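The @107 arithmetic gate above is the hugepage accounting identity the test insists on before proceeding: the kernel-reported total must equal the requested count plus surplus and reserved pages, which here is 1024 == 1024 + 0 + 0. A self-contained sketch of that check, with meminfo_val as a hypothetical helper standing in for the traced get_meminfo:

    #!/usr/bin/env bash
    # Sketch of the accounting gate traced at setup/hugepages.sh@107/@109.
    meminfo_val() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

    nr_hugepages=1024                        # what this run requested
    total=$(meminfo_val HugePages_Total)     # 1024 in the dump above
    surp=$(meminfo_val HugePages_Surp)       # 0
    resv=$(meminfo_val HugePages_Rsvd)       # 0

    if (( total == nr_hugepages + surp + resv )); then
    	echo "accounting consistent: $total == $nr_hugepages + $surp + $resv"
    else
    	echo "hugepage accounting mismatch" >&2
    	exit 1
    fi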
00:02:54.607 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:54.607 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:54.607 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:54.607 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:54.607 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:54.607 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:54.607 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:54.607 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:54.607 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:54.607 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:54.607 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:54.607 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:54.607 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:54.607 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322984 kB' 'MemFree: 78551424 kB' 'MemAvailable: 81960052 kB' 'Buffers: 2696 kB' 'Cached: 9307016 kB' 'SwapCached: 0 kB' 'Active: 6272304 kB' 'Inactive: 3517220 kB' 'Active(anon): 5886612 kB' 'Inactive(anon): 0 kB' 'Active(file): 385692 kB' 'Inactive(file): 3517220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483084 kB' 'Mapped: 182684 kB' 'Shmem: 5406800 kB' 'KReclaimable: 203888 kB' 'Slab: 639712 kB' 'SReclaimable: 203888 kB' 'SUnreclaim: 435824 kB' 'KernelStack: 19568 kB' 'PageTables: 7972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001520 kB' 'Committed_AS: 7308404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219028 kB' 'VmallocChunk: 0 kB' 'Percpu: 63744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1332180 kB' 'DirectMap2M: 18270208 kB' 'DirectMap1G: 82837504 kB'
[repeated setup/common.sh@31/@32 read/compare/continue traces elided: the scan walks the remaining /proc/meminfo keys without matching HugePages_Total until the record below]
00:02:54.609 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:54.609 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:02:54.609 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:54.609 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:54.609 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:54.609 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:02:54.609 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:54.609 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:54.609 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:54.609 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:54.609 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:54.609 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
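get_nodes, as traced above, discovers the NUMA topology from a sysfs glob and records a per-node hugepage target, 512 pages on each of this machine's two nodes. A standalone sketch of that discovery under the same sysfs layout (the 512-page split is this run's value, not a constant):

    #!/usr/bin/env bash
    # Sketch of the get_nodes pattern traced above.
    shopt -s extglob nullglob

    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
    	nodes_sys[${node##*node}]=512   # pages this node should end up with
    done

    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || { echo "no NUMA nodes found in sysfs" >&2; exit 1; }
    echo "found $no_nodes nodes, per-node targets: ${nodes_sys[*]}"

${node##*node} strips everything up to the last "node", turning /sys/devices/system/node/node1 into the array index 1, which is why the trace repeats the @29/@30 pair once per node directory.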
00:02:54.609 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:54.609 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:54.609 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:54.609 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:54.609 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:02:54.609 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:54.609 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:54.609 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:54.609 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:54.609 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:54.609 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:54.609 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:54.609 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634628 kB' 'MemFree: 22947984 kB' 'MemUsed: 9686644 kB' 'SwapCached: 0 kB' 'Active: 4374672 kB' 'Inactive: 3365944 kB' 'Active(anon): 4204612 kB' 'Inactive(anon): 0 kB' 'Active(file): 170060 kB' 'Inactive(file): 3365944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7402084 kB' 'Mapped: 39828 kB' 'AnonPages: 341744 kB' 'Shmem: 3866080 kB' 'KernelStack: 11352 kB' 'PageTables: 5052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111076 kB' 'Slab: 349928 kB' 'SReclaimable: 111076 kB' 'SUnreclaim: 238852 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
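Note the @23/@24 pair above: once get_meminfo is called with a node argument, mem_f switches to /sys/devices/system/node/node0/meminfo, whose records carry a "Node 0 " prefix that /proc/meminfo lines lack. The extglob substitution traced at setup/common.sh@29 normalizes them so the same key/value parser handles both files. A tiny demo of that substitution with sample records:

    #!/usr/bin/env bash
    # Demo of the "Node N " prefix strip traced at setup/common.sh@29.
    shopt -s extglob
    mem=('Node 0 MemTotal: 32634628 kB' 'Node 0 HugePages_Total: 512')
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
    # prints:
    # MemTotal: 32634628 kB
    # HugePages_Total: 512

The node0 dump above also shows the per-node half of the earlier global figures: HugePages_Total 512 here plus 512 on node1 below accounts for the system-wide 1024.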
[repeated setup/common.sh@31/@32 read/compare/continue traces elided: node0's meminfo keys are scanned against HugePages_Surp without a match until the record below]
00:02:54.610 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.610 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.610 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.610 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:54.610 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:54.610 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:54.610 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:54.610 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:54.610 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:54.610 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:54.610 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:02:54.610 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:54.610 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:54.610 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:54.610 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:54.610 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:54.610 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:54.610 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:54.610 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.610 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.610 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60688356 kB' 'MemFree: 55604648 kB' 'MemUsed: 5083708 kB' 'SwapCached: 0 kB' 'Active: 1897232 kB' 'Inactive: 151276 kB' 'Active(anon): 1681600 kB' 'Inactive(anon): 0 kB' 'Active(file): 215632 kB' 'Inactive(file): 151276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1907676 kB' 'Mapped: 142856 kB' 'AnonPages: 140900 kB' 'Shmem: 1540768 kB' 'KernelStack: 8216 kB' 'PageTables: 2920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 92812 kB' 'Slab: 289776 kB' 'SReclaimable: 92812 kB' 'SUnreclaim: 196964 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:54.610 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.610 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.610 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.610 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.610 
08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.610 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.610 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.610 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.610 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.610 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.610 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.611 08:41:16 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.611 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.612 08:41:16 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.612 08:41:16 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:54.612 node0=512 expecting 512 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:54.612 node1=512 expecting 512 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:54.612 00:02:54.612 real 0m3.167s 00:02:54.612 user 0m1.316s 00:02:54.612 sys 0m1.912s 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:02:54.612 08:41:16 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:54.612 ************************************ 00:02:54.612 END TEST per_node_1G_alloc 00:02:54.612 ************************************ 00:02:54.612 08:41:17 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:54.612 08:41:17 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:02:54.612 08:41:17 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:02:54.612 08:41:17 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:54.612 ************************************ 00:02:54.612 START TEST even_2G_alloc 00:02:54.612 ************************************ 00:02:54.612 08:41:17 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # even_2G_alloc 00:02:54.612 08:41:17 
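[annotation: every field-by-field pass condensed above is the get_meminfo helper in test/setup/common.sh reading /proc/meminfo or one node's meminfo and scanning it key by key. A minimal bash sketch of that logic, reconstructed from this trace rather than taken verbatim from the SPDK source, so details may differ:]

#!/usr/bin/env bash
shopt -s extglob   # for the +([0-9]) pattern used to strip the "Node N " prefix

# Print the value of one meminfo field, system-wide or for a single NUMA node.
get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # A per-node query reads that node's own meminfo instead.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N "
    while IFS=': ' read -r var val _; do
        # The long runs of "continue" in the log are these non-matching fields.
        if [[ $var == $get ]]; then
            echo "$val"
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Surp 1   # prints 0 here, matching the node1 pass above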
00:02:54.612 08:41:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:02:54.612 08:41:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:02:54.612 08:41:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:02:54.612 08:41:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:54.612 08:41:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:54.612 08:41:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:02:54.612 08:41:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:02:54.612 08:41:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:02:54.612 08:41:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:54.612 08:41:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:54.612 08:41:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:54.612 08:41:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:54.612 08:41:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:54.612 08:41:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:02:54.612 08:41:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:54.612 08:41:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:02:54.612 08:41:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:02:54.612 08:41:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:02:54.612 08:41:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:54.612 08:41:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:02:54.612 08:41:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:02:54.612 08:41:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:02:54.612 08:41:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
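[annotation: the sizing trace above turns the 2097152 kB (2 GiB) request into nr_hugepages=1024, given the system's 2048 kB default hugepage size, and get_test_nr_hugepages_per_node then spreads the pool evenly across both NUMA nodes, 512 pages each. A small bash sketch of that arithmetic, reconstructed from the trace; names follow the trace and remainder handling is omitted:]

size=2097152                                 # requested pool in kB (2 GiB)
default_hugepages=2048                       # default hugepage size in kB
nr_hugepages=$((size / default_hugepages))   # -> 1024 pages

_no_nodes=2                                  # NUMA nodes on this box
per_node=$((nr_hugepages / _no_nodes))       # -> 512, the even share
declare -a nodes_test
while ((_no_nodes > 0)); do
    # Same back-to-front walk as the hugepages.sh@81-@84 records above.
    nodes_test[_no_nodes - 1]=$per_node
    ((_no_nodes--))
done
echo "${nodes_test[@]}"                      # -> 512 512

[annotation: the NRHUGE=1024 HUGE_EVEN_ALLOC=yes environment set next in the log hands exactly this request to spdk/scripts/setup.sh.]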
00:02:54.612 08:41:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:02:54.612 08:41:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:02:54.612 08:41:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:02:54.612 08:41:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:54.612 08:41:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh
00:02:57.148 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0
00:02:57.407 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:02:57.407 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:57.407 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:02:57.407 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:02:57.407 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:02:57.408 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:02:57.408 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:02:57.408 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:02:57.408 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:02:57.408 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:02:57.408 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:02:57.408 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:02:57.408 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:02:57.408 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:02:57.408 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:02:57.408 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:02:57.408 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:02:57.673 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:02:57.673 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:02:57.673 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:57.673 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:57.673 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:57.673 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:57.673 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:57.673 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:57.673 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:57.673 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:57.673 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:02:57.673 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:57.673 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:57.673 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:57.673 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:57.673 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:57.673 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:57.673 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:57.673 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:57.673 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:57.673 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322984 kB' 'MemFree: 78564884 kB' 'MemAvailable: 81973512 kB' 'Buffers: 2696 kB' 'Cached: 9307132 kB' 'SwapCached: 0 kB' 'Active: 6274184 kB' 'Inactive: 3517220 kB' 'Active(anon): 5888492 kB' 'Inactive(anon): 0 kB' 'Active(file): 385692 kB' 'Inactive(file): 3517220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484952 kB' 'Mapped: 182780 kB' 'Shmem: 5406916 kB' 'KReclaimable: 203888 kB' 'Slab: 640000 kB' 'SReclaimable: 203888 kB' 'SUnreclaim: 436112 kB' 'KernelStack: 19632 kB' 'PageTables: 8176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001520 kB' 'Committed_AS: 7309172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219092 kB' 'VmallocChunk: 0 kB' 'Percpu: 63744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1332180 kB' 'DirectMap2M: 18270208 kB' 'DirectMap1G: 82837504 kB'
[xtrace condensed: the setup/common.sh@31-32 loop scans the fields above (MemTotal through HardwareCorrupted) against AnonHugePages, continuing past each non-match]
00:02:57.675 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:57.675 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:57.675 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:57.675 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
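[annotation: verify_nr_hugepages, traced since the setup.sh output above, samples AnonHugePages and then the global HugePages_Surp (the pass that follows) before checking the per-node totals. A condensed bash sketch of that flow, reconstructed from the trace; the transparent_hugepage path is an assumption inferred from the "always [madvise] never" value shown at hugepages.sh@96, and the trailing comment summarizes the later passes:]

verify_nr_hugepages() {
    local node sorted_t sorted_s surp resv anon

    # Sample anonymous hugepages only while THP is not pinned to "[never]"
    # (sysfs path assumed; the trace shows only the file's contents).
    if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *\[never\]* ]]; then
        anon=$(get_meminfo AnonHugePages)   # -> 0 in this run
    fi

    surp=$(get_meminfo HugePages_Surp)      # global surplus pages, scanned next
    # Later passes fold reserved/surplus pages into nodes_test[] per node and
    # compare each node's total against the expected 512/512 split.
}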
00:02:57.675 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:57.675 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:57.675 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:02:57.675 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:57.675 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:57.675 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:57.675 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:57.675 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:57.675 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:57.675 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:57.675 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322984 kB' 'MemFree: 78569732 kB' 'MemAvailable: 81978360 kB' 'Buffers: 2696 kB' 'Cached: 9307132 kB' 'SwapCached: 0 kB' 'Active: 6274200 kB' 'Inactive: 3517220 kB' 'Active(anon): 5888508 kB' 'Inactive(anon): 0 kB' 'Active(file): 385692 kB' 'Inactive(file): 3517220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485232 kB' 'Mapped: 182712 kB' 'Shmem: 5406916 kB' 'KReclaimable: 203888 kB' 'Slab: 639976 kB' 'SReclaimable: 203888 kB' 'SUnreclaim: 436088 kB' 'KernelStack: 19616 kB' 'PageTables: 8104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001520 kB' 'Committed_AS: 7309188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219044 kB' 'VmallocChunk: 0 kB' 'Percpu: 63744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1332180 kB' 'DirectMap2M: 18270208 kB' 'DirectMap1G: 82837504 kB'
00:02:57.675 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:57.675 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the setup/common.sh@31-32 loop scans the fields above (MemTotal through VmallocUsed) against HugePages_Surp, continuing past each non-match]
00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:02:57.676 08:41:20
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.676 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.677 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:57.677 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.677 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.677 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:57.677 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:57.677 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:57.677 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:57.677 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:57.677 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:57.677 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:57.677 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:57.677 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:57.677 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:57.677 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:57.677 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:57.677 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:57.677 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:57.677 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
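The wall of `continue` traces above is a single pattern repeated once per /proc/meminfo key: get_meminfo loads the file into an array, strips any per-node prefix, and walks the lines with IFS=': ' until the requested key matches. A minimal sketch of that pattern, reconstructed from the xtrace output (names mirror the trace, but this is an illustration, not SPDK's setup/common.sh verbatim):

#!/usr/bin/env bash
# Minimal sketch of the get_meminfo pattern traced above -- reconstructed
# from the xtrace output, not SPDK's setup/common.sh verbatim.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}  # key to look up, optional NUMA node
    local var val
    local mem_f=/proc/meminfo mem

    # A node argument redirects the lookup to the node-scoped file.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem <"$mem_f"
    # Node files prefix every line with "Node N "; strip it so the same
    # "Key: value" parse works for both files (extglob pattern).
    mem=("${mem[@]#Node +([0-9]) }")

    # Walk the lines; every key that is not $get is skipped with
    # `continue` -- exactly the long run of traces above.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

surp=$(get_meminfo HugePages_Surp)  # prints 0 on this box

One scan of roughly 60 keys therefore produces roughly 180 trace lines (test, continue, read), which is why the log balloons each time the helper runs.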
00:02:57.677 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:57.677 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:57.677 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:02:57.677 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:57.677 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:57.677 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:57.677 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:57.677 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:57.677 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:57.677 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:57.677 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:57.677 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:57.677 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322984 kB' 'MemFree: 78569648 kB' 'MemAvailable: 81978276 kB' 'Buffers: 2696 kB' 'Cached: 9307152 kB' 'SwapCached: 0 kB' 'Active: 6273148 kB' 'Inactive: 3517220 kB' 'Active(anon): 5887456 kB' 'Inactive(anon): 0 kB' 'Active(file): 385692 kB' 'Inactive(file): 3517220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484132 kB' 'Mapped: 182712 kB' 'Shmem: 5406936 kB' 'KReclaimable: 203888 kB' 'Slab: 640052 kB' 'SReclaimable: 203888 kB' 'SUnreclaim: 436164 kB' 'KernelStack: 19568 kB' 'PageTables: 7960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001520 kB' 'Committed_AS: 7309212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219044 kB' 'VmallocChunk: 0 kB' 'Percpu: 63744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1332180 kB' 'DirectMap2M: 18270208 kB' 'DirectMap1G: 82837504 kB'
00:02:57.677 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: read loop continues past MemTotal .. HugePages_Free while scanning for HugePages_Rsvd]
00:02:57.679 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:57.679 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:57.679 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:57.679 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:57.679 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:57.679 nr_hugepages=1024
00:02:57.679 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:57.679 resv_hugepages=0
00:02:57.679 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:57.679 surplus_hugepages=0
00:02:57.679 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:57.679 anon_hugepages=0
00:02:57.679 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:57.679 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
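The hugepages.sh@99-@109 lines above pull three counters out of /proc/meminfo and assert the pool's accounting before the per-node pass. A sketch of that consistency check, reusing the hypothetical get_meminfo helper sketched earlier:

# Accounting check mirroring setup/hugepages.sh@99-@110 above: the pool is
# healthy when the kernel's HugePages_Total equals the requested page count
# plus surplus and reserved pages. nr_hugepages is what the test configured;
# get_meminfo is the hypothetical helper sketched earlier.
nr_hugepages=1024
surp=$(get_meminfo HugePages_Surp)   # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)   # 0 in this run

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"

total=$(get_meminfo HugePages_Total) # 1024 in this run
if (( total != nr_hugepages + surp + resv )); then
    echo "hugepage accounting mismatch: $total != $nr_hugepages+$surp+$resv" >&2
    exit 1
fi

The snapshot above confirms the same relation a second way: Hugetlb (2097152 kB) equals HugePages_Total (1024) times Hugepagesize (2048 kB), so the whole 2G pool is intact.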
00:02:57.679 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:57.679 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17-31 -- # [xtrace condensed: same get_meminfo setup as above, with get=HugePages_Total and mem_f=/proc/meminfo]
00:02:57.679 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322984 kB' 'MemFree: 78569648 kB' 'MemAvailable: 81978276 kB' 'Buffers: 2696 kB' 'Cached: 9307172 kB' 'SwapCached: 0 kB' 'Active: 6273148 kB' 'Inactive: 3517220 kB' 'Active(anon): 5887456 kB' 'Inactive(anon): 0 kB' 'Active(file): 385692 kB' 'Inactive(file): 3517220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484140 kB' 'Mapped: 182712 kB' 'Shmem: 5406956 kB' 'KReclaimable: 203888 kB' 'Slab: 640052 kB' 'SReclaimable: 203888 kB' 'SUnreclaim: 436164 kB' 'KernelStack: 19584 kB' 'PageTables: 8012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001520 kB' 'Committed_AS: 7309232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219044 kB' 'VmallocChunk: 0 kB' 'Percpu: 63744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1332180 kB' 'DirectMap2M: 18270208 kB' 'DirectMap1G: 82837504 kB'
00:02:57.679 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: read loop continues past MemTotal .. Unaccepted while scanning for HugePages_Total]
00:02:57.681 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:57.681 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:57.681 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:57.681 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:57.681 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:57.681 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:57.681 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:57.681 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:57.681 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:57.681 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:57.681 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:57.681 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:57.681 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:57.681 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:57.681 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:57.681 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:02:57.681 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:57.681 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:57.681 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:57.681 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:57.681 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:57.681 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:57.681 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:57.681 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:57.681 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:57.681 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634628 kB' 'MemFree: 22961428 kB' 'MemUsed: 9673200 kB' 'SwapCached: 0 kB' 'Active: 4375600 kB' 'Inactive: 3365944 kB' 'Active(anon): 4205540 kB' 'Inactive(anon): 0 kB' 'Active(file): 170060 kB' 'Inactive(file): 3365944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7402108 kB' 'Mapped: 39828 kB' 'AnonPages: 342936 kB' 'Shmem: 3866104 kB' 'KernelStack: 11368 kB' 'PageTables: 5088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111076 kB' 'Slab: 350372 kB' 'SReclaimable: 111076 kB' 'SUnreclaim: 239296 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:57.681 08:41:20 
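The get_meminfo calls traced here all follow one pattern: pick /proc/meminfo, or the per-node sysfs file when a node argument is given, strip the "Node N " prefix that the per-node files carry, then scan key by key until the requested field matches. A minimal bash sketch of that pattern (illustrative only, not the verbatim setup/common.sh source; the name get_meminfo_sketch is ours):

shopt -s extglob                              # the +([0-9]) pattern below needs extglob
get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo mem var val _ line
    # Per-node counters live in sysfs and prefix every line with "Node N ".
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")          # drop the "Node N " prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue      # not the requested key; keep scanning
        echo "$val"                           # e.g. 1024 for HugePages_Total above
        return 0
    done
    return 1
}

With two memory nodes on this rig the trace calls it once per node (HugePages_Surp 0, HugePages_Surp 1) as well as globally.
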
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [scan: node0 meminfo keys MemTotal .. HugePages_Free skipped; HugePages_Surp matched] 00:02:57.682 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:57.682 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:57.682 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:57.682 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:57.682 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:57.682 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:57.682 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17-31 -- # [get_meminfo HugePages_Surp 1: mem_f=/sys/devices/system/node/node1/meminfo, "Node 1 " prefix stripped] 00:02:57.682 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60688356 kB' 'MemFree: 55608572 kB' 'MemUsed: 5079784 kB' 'SwapCached: 0 kB' 'Active: 1897604 kB' 'Inactive: 151276 kB' 'Active(anon): 1681972 kB' 'Inactive(anon): 0 kB' 'Active(file): 215632 kB' 'Inactive(file): 151276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1907804 kB' 'Mapped: 142884 kB' 'AnonPages: 141204 kB' 'Shmem: 1540896 kB' 'KernelStack: 8216 kB' 'PageTables: 2924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 92812 kB' 'Slab: 289680 kB' 'SReclaimable: 92812 kB' 'SUnreclaim: 196868 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:57.683
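The bookkeeping around these lookups (hugepages.sh@115-117 above, @126-130 further down) folds reserved and surplus pages into each node's expected count, then compares that against what the kernel actually reports per node. Roughly, reusing get_meminfo_sketch from the sketch above (resv, nodes_test and nodes_sys named as in the trace; our reading, not the verbatim script):

# Values mirror this run: two nodes, even split, nothing reserved.
declare -a nodes_test=(512 512) nodes_sys=(512 512)
resv=0
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                              # fold in reserved pages
    surp=$(get_meminfo_sketch HugePages_Surp "$node" || echo 0) # per-node surplus, 0 here
    (( nodes_test[node] += surp ))                              # 512 + 0 + 0 = 512
done
for node in "${!nodes_test[@]}"; do
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]]           # any mismatch fails the test
done
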
08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [scan: node1 meminfo keys MemTotal .. HugePages_Free skipped; HugePages_Surp matched] 00:02:57.684 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:57.684 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:57.684 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:57.684 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:57.684 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:57.684 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:57.684 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:57.684 node0=512 expecting 512 00:02:57.684 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:57.684 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:57.684 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:57.684 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:57.684 node1=512 expecting 512 00:02:57.684 08:41:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:57.684 00:02:57.684 real 0m3.123s 00:02:57.684 user 0m1.271s 00:02:57.684 sys 0m1.911s 00:02:57.684 08:41:20 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:02:57.684 08:41:20 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:57.684 ************************************ 00:02:57.684 END TEST even_2G_alloc 00:02:57.684 ************************************ 00:02:57.684
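even_2G_alloc asked for 1024 pages split evenly; odd_alloc, starting below, asks for 2098176 kB, which is 1024.5 two-megabyte pages and apparently gets rounded up to nr_hugepages=1025. The @81-84 arithmetic visible further down then hands each node the floor of the remaining share, so the odd page ends up on node 0. A plausible reading of that logic in sketch form (assumed name split_hugepages_per_node; not the verbatim setup/hugepages.sh):

split_hugepages_per_node() {
    local remaining=$1 nodes=$2
    local -a per_node
    while (( nodes > 0 )); do
        per_node[nodes - 1]=$(( remaining / nodes ))   # floor share for the last node
        (( remaining -= per_node[nodes - 1], nodes-- ))
    done
    echo "${per_node[@]}"
}
split_hugepages_per_node 1024 2    # -> 512 512  (even_2G_alloc)
split_hugepages_per_node 1025 2    # -> 513 512  (odd_alloc: node0 gets the odd page)
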
08:41:20 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:02:57.684 08:41:20 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:02:57.684 08:41:20 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable 00:02:57.684 08:41:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:57.944 ************************************ 00:02:57.944 START TEST odd_alloc 00:02:57.944 ************************************ 00:02:57.944 08:41:20 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # odd_alloc 00:02:57.944 08:41:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:02:57.944 08:41:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:02:57.944 08:41:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:57.944 08:41:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:57.944 08:41:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:02:57.944 08:41:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:57.944 08:41:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:57.944 08:41:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:57.944 08:41:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:02:57.944 08:41:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:57.944 08:41:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:57.944 08:41:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:57.944 08:41:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:57.944 08:41:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:57.944 08:41:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:57.944 08:41:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:57.944 08:41:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:02:57.944 08:41:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:57.944 08:41:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:57.944 08:41:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:57.944 08:41:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:57.944 08:41:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:57.944 08:41:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:57.944 08:41:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:57.944 08:41:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:57.944 08:41:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:02:57.944 08:41:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:57.944 08:41:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh 00:03:00.478 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:03:00.478 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:00.478 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:00.478 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:00.478 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:00.478 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:00.478 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:00.478 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:00.739 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:00.739 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:00.739 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:00.739 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:00.739 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:00.739 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:00.739 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:00.739 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:00.739 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:00.739 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:00.739 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:00.739 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:00.739 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:00.739 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:00.739 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:00.739 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:00.739 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:00.739 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:00.739
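The @96 test above is the transparent-hugepage gate: the active THP mode is the bracketed token in sysfs ('always [madvise] never' on this rig), and verification only consults AnonHugePages when the mode is not [never]. Roughly, a sketch assuming the standard sysfs path:

thp=/sys/kernel/mm/transparent_hugepage/enabled
# The active mode is the bracketed word, e.g. "always [madvise] never".
if [[ -r $thp && $(<"$thp") != *"[never]"* ]]; then
    # THP may back anonymous mappings, so AnonHugePages is meaningful.
    grep AnonHugePages /proc/meminfo
fi
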
08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:00.739 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17-31 -- # [get_meminfo AnonHugePages: no node argument, so mem_f=/proc/meminfo] 00:03:00.739 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322984 kB' 'MemFree: 78578044 kB' 'MemAvailable: 81986660 kB' 'Buffers: 2696 kB' 'Cached: 9307288 kB' 'SwapCached: 0 kB' 'Active: 6275352 kB' 'Inactive: 3517220 kB' 'Active(anon): 5889660 kB' 'Inactive(anon): 0 kB' 'Active(file): 385692 kB' 'Inactive(file): 3517220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484828 kB' 'Mapped: 182740 kB' 'Shmem: 5407072 kB' 'KReclaimable: 203864 kB' 'Slab: 639424 kB' 'SReclaimable: 203864 kB' 'SUnreclaim: 435560 kB' 'KernelStack: 19728 kB' 'PageTables: 8216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54000496 kB' 'Committed_AS: 7311100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219236 kB' 'VmallocChunk: 0 kB' 'Percpu: 63744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1332180 kB' 'DirectMap2M: 18270208 kB' 'DirectMap1G: 82837504 kB' 00:03:00.739 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [scan: keys MemTotal .. HardwareCorrupted skipped; AnonHugePages matched] 00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17-31 -- # [get_meminfo HugePages_Surp: no node argument, so mem_f=/proc/meminfo] 00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322984 kB' 'MemFree: 78580864 kB' 'MemAvailable: 81989480 kB' 'Buffers: 2696 kB' 'Cached: 9307288 kB' 'SwapCached: 0 kB' 'Active: 6275384 kB' 'Inactive: 3517220 kB' 'Active(anon): 5889692 kB' 'Inactive(anon): 0 kB' 'Active(file): 385692 kB' 'Inactive(file): 3517220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB'
'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485384 kB' 'Mapped: 182816 kB' 'Shmem: 5407072 kB' 'KReclaimable: 203864 kB' 'Slab: 639700 kB' 'SReclaimable: 203864 kB' 'SUnreclaim: 435836 kB' 'KernelStack: 19696 kB' 'PageTables: 8280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54000496 kB' 'Committed_AS: 7312604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219140 kB' 'VmallocChunk: 0 kB' 'Percpu: 63744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1332180 kB' 'DirectMap2M: 18270208 kB' 'DirectMap1G: 82837504 kB' 00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
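For readers following the trace: the block above is setup/common.sh's get_meminfo scanning /proc/meminfo line by line until the requested key matches, then echoing its value. Below is a minimal, self-contained bash reconstruction of that pattern; the names mirror the traced lines, but this is an illustrative sketch assembled from the xtrace, not the verbatim SPDK helper.

#!/usr/bin/env bash
shopt -s extglob  # needed for the +([0-9]) pattern that strips "Node N " prefixes

get_meminfo() {
	local get=$1 node=${2:-}
	local mem_f mem var val _

	mem_f=/proc/meminfo
	# Prefer per-node counters when a node is given; in the trace above node is
	# empty, so /sys/devices/system/node/node/meminfo does not exist and the
	# global /proc/meminfo is used instead.
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem <"$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix each line with "Node N "

	local line
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<<"$line"  # e.g. "HugePages_Surp: 0" -> var/val
		[[ $var == "$get" ]] && { echo "$val"; return 0; }
	done
	return 1
}

get_meminfo HugePages_Surp  # prints 0 on the node traced above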
00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:00.741 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... identical per-key iterations elided: MemFree through HugePages_Rsvd each fail the \H\u\g\e\P\a\g\e\s\_\S\u\r\p match and continue ...]
00:03:00.742 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:00.742 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:00.742 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:00.742 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:00.742 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:00.742 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:00.742 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:00.742 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:00.742 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:00.742 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:00.742 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:00.742 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:00.742 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:00.742 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:00.742 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:00.742 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:00.742 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:00.743 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322984 kB' 'MemFree: 78579640 kB' 'MemAvailable: 81988256 kB' 'Buffers: 2696 kB' 'Cached: 9307308 kB' 'SwapCached: 0 kB' 'Active: 6274608 kB' 'Inactive: 3517220 kB' 'Active(anon): 5888916 kB' 'Inactive(anon): 0 kB' 'Active(file): 385692 kB' 'Inactive(file): 3517220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485024 kB' 'Mapped: 182740 kB' 'Shmem: 5407092 kB' 'KReclaimable: 203864 kB' 'Slab: 639680 kB' 'SReclaimable: 203864 kB' 'SUnreclaim: 435816 kB' 'KernelStack: 19616 kB' 'PageTables: 7848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54000496 kB' 'Committed_AS: 7311136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219236 kB' 'VmallocChunk: 0 kB' 'Percpu: 63744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1332180 kB' 'DirectMap2M: 18270208 kB' 'DirectMap1G: 82837504 kB'
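The same single-key lookup can also be written as a one-liner; a hedged awk equivalent, shown for reference only and not what the suite actually runs:

# Same result as the traced "get_meminfo HugePages_Rsvd" call:
awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo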
00:03:00.743 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:00.743 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:03:00.743 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:00.743 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical per-key iterations elided: MemFree through HugePages_Free each fail the \H\u\g\e\P\a\g\e\s\_\R\s\v\d match and continue ...]
00:03:00.745 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:00.745 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:00.745 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:00.745 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:00.745 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:00.745 nr_hugepages=1025
00:03:00.745 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:00.745 resv_hugepages=0
00:03:00.745 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:00.745 surplus_hugepages=0
00:03:00.745 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:00.745 anon_hugepages=0
00:03:00.745 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:00.745 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:00.745 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:00.745 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:00.745 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:00.745 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:00.745 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:00.745 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:00.745 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:00.745 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:00.745 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:01.006 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:01.006 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322984 kB' 'MemFree: 78578756 kB' 'MemAvailable: 81987372 kB' 'Buffers: 2696 kB' 'Cached: 9307328 kB' 'SwapCached: 0 kB' 'Active: 6275076 kB' 'Inactive: 3517220 kB' 'Active(anon): 5889384 kB' 'Inactive(anon): 0 kB' 'Active(file): 385692 kB' 'Inactive(file): 3517220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485496 kB' 'Mapped: 182740 kB' 'Shmem: 5407112 kB' 'KReclaimable: 203864 kB' 'Slab: 639616 kB' 'SReclaimable: 203864 kB' 'SUnreclaim: 435752 kB' 'KernelStack: 19936 kB' 'PageTables: 8884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54000496 kB' 'Committed_AS: 7312648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219300 kB' 'VmallocChunk: 0 kB' 'Percpu: 63744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1332180 kB' 'DirectMap2M: 18270208 kB' 'DirectMap1G: 82837504 kB'
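The setup/hugepages.sh@107 and @109 checks above assert that the kernel really provisioned the odd page count requested: HugePages_Total must equal nr_hugepages plus surplus plus reserved pages, and since surp=0 and resv=0 in this run it must equal nr_hugepages outright. A standalone sketch of that accounting, with variable names following the trace and values taken from this run:

nr_hugepages=1025  # the odd allocation requested by the test
surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)   # 0 in this run
resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)   # 0 in this run
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo) # 1025 in this run

(( total == nr_hugepages + surp + resv ))  # every page is accounted for
(( total == nr_hugepages ))                # and none of them is surplus or reserved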
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:01.006 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:01.006 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:01.006 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... identical per-key iterations elided: MemFree through HardwareCorrupted each fail the \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l match and continue ...]
00:03:01.007 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.007 08:41:23
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.007 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.007 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.007 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.007 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.007 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.007 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.007 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.007 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.007 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.007 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.007 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.007 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.007 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.007 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.007 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- 
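What the long run of "continue" entries above records is setup/common.sh's get_meminfo helper scanning meminfo one key/value line at a time until the requested field (HugePages_Total here, value 1025) turns up. Below is a minimal sketch of that scan pattern, reconstructed from the trace rather than copied from the SPDK source; the real helper buffers the file via mapfile and replays it through printf, which is why printf lines appear in this log.

    # Scan "key: value" lines and print the value of the first matching key.
    # Each non-matching key produces one "continue" entry in an xtrace log.
    get_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"        # e.g. 1025 for HugePages_Total on this machine
            return 0
        done < /proc/meminfo
        return 1
    }

    get_field HugePages_Total   # -> 1025 in this run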
00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634628 kB' 'MemFree: 22971692 kB' 'MemUsed: 9662936 kB' 'SwapCached: 0 kB' 'Active: 4376980 kB' 'Inactive: 3365944 kB' 'Active(anon): 4206920 kB' 'Inactive(anon): 0 kB' 'Active(file): 170060 kB' 'Inactive(file): 3365944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7402144 kB' 'Mapped: 39836 kB' 'AnonPages: 343916 kB' 'Shmem: 3866140 kB' 'KernelStack: 11720 kB' 'PageTables: 5916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111052 kB' 'Slab: 350032 kB' 'SReclaimable: 111052 kB' 'SUnreclaim: 238980 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:01.008 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
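When get_meminfo is called with a node argument (HugePages_Surp for node 0 above), the trace shows it switching its input from /proc/meminfo to the node's sysfs meminfo file and stripping the "Node N " prefix those lines carry. A sketch of that selection logic under the same file layout; node_field is a hypothetical name for illustration, not the SPDK helper.

    # Per-node meminfo lines read "Node 0 MemTotal: 32634628 kB", so the
    # "Node N " prefix must be stripped before the key/value split.
    node_field() {
        local get=$1 node=$2 mem_f=/proc/meminfo line var val _
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while read -r line; do
            line=${line#"Node $node "}
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1
    }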
[setup/common.sh@31-32: the read/compare/continue cycle repeats for each node0 meminfo field, MemFree through HugePages_Free, until the requested field matches]
00:03:01.009 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:01.009 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:01.009 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:01.009 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:01.009 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:01.009 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:01.009 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:01.009 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:01.009 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:03:01.009 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:01.009 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:01.009 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:01.009 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:01.009 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:01.009 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:01.009 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:01.009 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:01.009 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:01.009 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60688356 kB' 'MemFree: 55606880 kB' 'MemUsed: 5081476 kB' 'SwapCached: 0 kB' 'Active: 1898152 kB' 'Inactive: 151276 kB' 'Active(anon): 1682520 kB' 'Inactive(anon): 0 kB' 'Active(file): 215632 kB' 'Inactive(file): 151276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1907916 kB' 'Mapped: 142904 kB' 'AnonPages: 141588 kB' 'Shmem: 1541008 kB' 'KernelStack: 8200 kB' 'PageTables: 2868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 92812 kB' 'Slab: 289584 kB' 'SReclaimable: 92812 kB' 'SUnreclaim: 196772 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:03:01.009 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:01.009 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
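Before comparing per-node counts, hugepages.sh@115-117 inflates each node's expected total by the globally reserved pages and that node's surplus pages; in this run both are 0 (the "echo 0" above), so the expectations stay at 512 and 513 while the identical scan repeats for node 1 below. A sketch of that accounting, assuming nodes_test was seeded earlier in verify_nr_hugepages (outside this excerpt) and reusing the hypothetical node_field helper sketched above:

    declare -a nodes_test=([0]=512 [1]=513)   # expected free pages per node
    resv=0                                    # HugePages_Rsvd, 0 in this run
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))              # hugepages.sh@116
        surp=$(node_field HugePages_Surp "$node")   # "0" for both nodes here
        (( nodes_test[node] += surp ))              # hugepages.sh@117
    done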
[setup/common.sh@31-32: the read/compare/continue cycle repeats for each node1 meminfo field until the requested field matches]
00:03:01.010 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:01.010 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:01.010 08:41:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:01.011 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:01.011 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:01.011 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:01.011 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
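The sorted_t/sorted_s assignments above are a small bash sorting trick: using each per-node count as the index of a plain array means "${!arr[*]}" later lists the counts in ascending order, so the observed and requested allocations compare equal even when the nodes are swapped. That is why the "node0=512 expecting 513" / "node1=513 expecting 512" lines below still end in a passing check at hugepages.sh@130. A standalone sketch of the trick:

    # Indices of a plain (indexed) bash array come back sorted from "${!arr[*]}",
    # so two per-node allocations match iff their index lists compare equal.
    nodes_test=(512 513)   # observed per-node totals (node0, node1)
    nodes_sys=(513 512)    # requested totals, deliberately swapped here
    sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1
        sorted_s[nodes_sys[node]]=1
    done
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "per-node totals match"
    # both index lists expand to "512 513", so the comparison passes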
00:03:01.011 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:01.011 node0=512 expecting 513
00:03:01.011 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:01.011 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:01.011 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:01.011 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:01.011 node1=513 expecting 512
00:03:01.011 08:41:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:01.011
00:03:01.011 real 0m3.144s
00:03:01.011 user 0m1.321s
00:03:01.011 sys 0m1.880s
00:03:01.011 08:41:23 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:03:01.011 08:41:23 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:01.011 ************************************
00:03:01.011 END TEST odd_alloc
00:03:01.011 ************************************
00:03:01.011 08:41:23 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:01.011 08:41:23 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:03:01.011 08:41:23 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:03:01.011 08:41:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:01.011 ************************************
00:03:01.011 START TEST custom_alloc
00:03:01.011 ************************************
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # custom_alloc
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
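The two get_test_nr_hugepages calls above reduce to one division: the size argument (in kB, by the numbers in this trace) over the default hugepage size, 2048 kB on this machine per the Hugepagesize line in the snapshot further down. 1048576/2048 = 512 pages and 2097152/2048 = 1024 pages, matching the nr_hugepages values above. The arithmetic, under that kB assumption:

    # Sizes in kB; default_hugepages=2048 kB (2 MiB pages) on this machine.
    default_hugepages=2048
    for size in 1048576 2097152; do
        (( size >= default_hugepages )) || continue    # hugepages.sh@55
        echo "size=${size}kB -> nr_hugepages=$(( size / default_hugepages ))"
    done
    # size=1048576kB -> nr_hugepages=512
    # size=2097152kB -> nr_hugepages=1024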
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:01.011 08:41:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh
00:03:03.542 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0
00:03:03.800 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:03.800 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:03.800 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:03.800 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:03.800 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:03.800 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:03.800 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:03.800 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:03.800 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:03.800 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:03.800 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:03.800 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:03.800 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:03.800 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:03.800 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:03.800 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:03.800 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
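The run above ends with HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' handed to scripts/setup.sh, which is what produces the asymmetric 512/1024 split verified next. The kernel's standard per-node knob for this is the nr_hugepages file under each node's hugepages directory; the sketch below shows one plausible way a setup script can honor such a request (the sysfs path is the stock kernel interface, but the HUGENODE parsing here is illustrative, not lifted from scripts/setup.sh):

    # Writing N to this sysfs file reserves N hugepages of that size on that node.
    HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
    IFS=, read -ra reqs <<< "$HUGENODE"
    for req in "${reqs[@]}"; do
        node=${req#nodes_hp[}; node=${node%%]*}   # "0", then "1"
        pages=${req#*=}                           # "512", then "1024"
        echo "$pages" | sudo tee \
            "/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages"
    done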
00:03:04.064 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:04.064 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:04.064 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:03:04.064 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:04.064 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:04.064 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:04.064 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:04.064 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:04.064 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:04.064 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:04.064 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:04.064 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:04.064 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:04.064 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:04.064 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:04.064 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:04.064 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:04.064 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:04.064 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:04.064 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:04.064 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:04.064 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322984 kB' 'MemFree: 77525472 kB' 'MemAvailable: 80934088 kB' 'Buffers: 2696 kB' 'Cached: 9307432 kB' 'SwapCached: 0 kB' 'Active: 6275240 kB' 'Inactive: 3517220 kB' 'Active(anon): 5889548 kB' 'Inactive(anon): 0 kB' 'Active(file): 385692 kB' 'Inactive(file): 3517220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485076 kB' 'Mapped: 182816 kB' 'Shmem: 5407216 kB' 'KReclaimable: 203864 kB' 'Slab: 639924 kB' 'SReclaimable: 203864 kB' 'SUnreclaim: 436060 kB' 'KernelStack: 19536 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53477232 kB' 'Committed_AS: 7310148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219044 kB' 'VmallocChunk: 0 kB' 'Percpu: 63744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1332180 kB' 'DirectMap2M: 18270208 kB' 'DirectMap1G: 82837504 kB'
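verify_nr_hugepages starts from the system-wide view: the snapshot above reports HugePages_Total: 1536, exactly nodes_hp[0] + nodes_hp[1] = 512 + 1024, and the @96 test shows transparent hugepages are not set to "[never]", which is why AnonHugePages is read at all (a nonzero value would mean THP is inflating the numbers). A reduced sketch of those two checks, reusing the get_field sketch from earlier:

    # System-wide total must equal the sum of the per-node requests; only
    # consult AnonHugePages when THP is not disabled, as hugepages.sh@96 does.
    expected=$(( 512 + 1024 ))
    total=$(get_field HugePages_Total)          # 1536 in the snapshot above
    (( total == expected )) || echo "hugepage total $total != $expected" >&2
    if [[ $(cat /sys/kernel/mm/transparent_hugepage/enabled) != *\[never\]* ]]; then
        anon=$(get_field AnonHugePages)         # "0" (kB) in the snapshot above
    fi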
[log condensed: get_meminfo walks every /proc/meminfo key from MemTotal through HardwareCorrupted; each non-matching key emits a "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "-- # continue" record pair before the loop re-reads]
00:03:04.066 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:04.066 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:04.066 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:04.066 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:04.066 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[log condensed: the setup/common.sh@17-31 prologue repeats (local get=HugePages_Surp, mem_f=/proc/meminfo, mapfile -t mem, IFS=': ', read -r var val _)]
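The block of records condensed above is setup/common.sh's get_meminfo helper at work: it snapshots the meminfo file into an array, then walks it entry by entry with IFS=': ' until the requested key matches, echoing the value and returning. A simplified re-creation (a sketch, assuming plain /proc/meminfo; the per-node /sys/devices/system/node/<node>/meminfo branch and the 'Node N' prefix stripping are omitted):

    #!/usr/bin/env bash
    # Simplified get_meminfo in the style of the traced setup/common.sh helper.
    get_meminfo() {
        local get=$1 line var val _ mem
        mapfile -t mem < /proc/meminfo
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue  # source of the long runs of '-- # continue'
            echo "${val:-0}"                  # cf. common.sh@33 '-- # echo 0'
            return 0
        done
        return 1
    }
    get_meminfo HugePages_Surp  # prints 0 on this host, matching the trace

Because the scan is linear over roughly sixty keys, each lookup in the real script emits one test/continue pair per non-matching key, which is why these stretches of the log are so long.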
00:03:04.066 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322984 kB' 'MemFree: 77526648 kB' 'MemAvailable: 80935264 kB' 'Buffers: 2696 kB' 'Cached: 9307432 kB' 'SwapCached: 0 kB' 'Active: 6274784 kB' 'Inactive: 3517220 kB' 'Active(anon): 5889092 kB' 'Inactive(anon): 0 kB' 'Active(file): 385692 kB' 'Inactive(file): 3517220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485104 kB' 'Mapped: 182812 kB' 'Shmem: 5407216 kB' 'KReclaimable: 203864 kB' 'Slab: 639880 kB' 'SReclaimable: 203864 kB' 'SUnreclaim: 436016 kB' 'KernelStack: 19552 kB' 'PageTables: 7880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53477232 kB' 'Committed_AS: 7310168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218964 kB' 'VmallocChunk: 0 kB' 'Percpu: 63744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1332180 kB' 'DirectMap2M: 18270208 kB' 'DirectMap1G: 82837504 kB'
[log condensed: the same per-key scan, now against HugePages_Surp; every key from MemTotal through HugePages_Rsvd fails the "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" match and continues]
00:03:04.068 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:04.068 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:04.068 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:04.068 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:04.068 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[log condensed: the setup/common.sh@17-31 prologue repeats with get=HugePages_Rsvd]
00:03:04.068 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322984 kB' 'MemFree: 77528188 kB' 'MemAvailable: 80936804 kB' 'Buffers: 2696 kB' 'Cached: 9307456 kB' 'SwapCached: 0 kB' 'Active: 6274492 kB' 'Inactive: 3517220 kB' 'Active(anon): 5888800 kB' 'Inactive(anon): 0 kB' 'Active(file): 385692 kB' 'Inactive(file): 3517220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484860 kB' 'Mapped: 182752 kB' 'Shmem: 5407240 kB' 'KReclaimable: 203864 kB' 'Slab: 639928 kB' 'SReclaimable: 203864 kB' 'SUnreclaim: 436064 kB' 'KernelStack: 19536 kB' 'PageTables: 7832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53477232 kB' 'Committed_AS: 7310324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218932 kB' 'VmallocChunk: 0 kB' 'Percpu: 63744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1332180 kB' 'DirectMap2M: 18270208 kB' 'DirectMap1G: 82837504 kB'
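All of the meminfo snapshots so far agree on the hugepage counters: HugePages_Total and HugePages_Free are 1536, and Rsvd/Surp are 0. The Hugetlb figure is consistent with the page count, which is a quick way to sanity-check a log like this (plain arithmetic on values taken from the snapshots above):

    # 1536 pages x 2048 kB/page = 3145728 kB, matching 'Hugetlb: 3145728 kB'.
    echo $((1536 * 2048))

HugePages_Free still equals HugePages_Total because no test process has mapped the pages yet at this point in the run.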
[log condensed: the same per-key scan, now against HugePages_Rsvd; every key from MemTotal through HugePages_Free fails the "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" match and continues]
00:03:04.070 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:04.070 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:04.070 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
-- setup/hugepages.sh@100 -- # resv=0 00:03:04.070 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:04.070 nr_hugepages=1536 00:03:04.070 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:04.070 resv_hugepages=0 00:03:04.070 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:04.070 surplus_hugepages=0 00:03:04.070 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:04.070 anon_hugepages=0 00:03:04.070 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:04.070 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:04.070 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:04.070 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:04.070 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:04.070 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:04.070 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:04.070 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.070 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.070 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.070 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.070 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.070 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.070 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.070 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322984 kB' 'MemFree: 77527952 kB' 'MemAvailable: 80936568 kB' 'Buffers: 2696 kB' 'Cached: 9307504 kB' 'SwapCached: 0 kB' 'Active: 6274460 kB' 'Inactive: 3517220 kB' 'Active(anon): 5888768 kB' 'Inactive(anon): 0 kB' 'Active(file): 385692 kB' 'Inactive(file): 3517220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484800 kB' 'Mapped: 182752 kB' 'Shmem: 5407288 kB' 'KReclaimable: 203864 kB' 'Slab: 639928 kB' 'SReclaimable: 203864 kB' 'SUnreclaim: 436064 kB' 'KernelStack: 19568 kB' 'PageTables: 7964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53477232 kB' 'Committed_AS: 7310720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218948 kB' 'VmallocChunk: 0 kB' 'Percpu: 63744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1332180 kB' 'DirectMap2M: 18270208 kB' 'DirectMap1G: 82837504 kB' 00:03:04.070 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:04.070 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.070 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.070 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.070 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.070 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.070 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.070 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.070 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.070 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.070 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.070 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.071 08:41:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.071 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.072 08:41:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.072 08:41:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # 
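The scans above are the xtrace of setup/common.sh's get_meminfo helper: it loads /proc/meminfo (or a node's sysfs copy when a node id is given) and walks the key/value pairs until the requested key matches, then prints its value. A minimal sketch of the same technique; the helper name meminfo_get is hypothetical, not the SPDK function itself, and the per-node fallback mirrors the @22-@24 lines in the trace:

    # meminfo_get KEY [NODE] - print the value column for KEY
    meminfo_get() {
        local get=$1 node=$2 mem_f=/proc/meminfo line var val _
        # With a node id, prefer the per-node sysfs copy (same fallback as the trace)
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#"Node $node "}   # per-node lines carry a "Node N " prefix
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done <"$mem_f"
        return 1                         # key not present
    }

    meminfo_get HugePages_Total    # -> 1536 on this runner
    meminfo_get HugePages_Surp 0   # -> 0, read from node0/meminfo

The two HugePages_Surp reads that follow take exactly this path through the node0 and node1 meminfo files.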
00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:04.072 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634628 kB' 'MemFree: 22973868 kB' 'MemUsed: 9660760 kB' 'SwapCached: 0 kB' 'Active: 4375292 kB' 'Inactive: 3365944 kB' 'Active(anon): 4205232 kB' 'Inactive(anon): 0 kB' 'Active(file): 170060 kB' 'Inactive(file): 3365944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7402148 kB' 'Mapped: 39828 kB' 'AnonPages: 342328 kB' 'Shmem: 3866144 kB' 'KernelStack: 11352 kB' 'PageTables: 5044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111052 kB' 'Slab: 350352 kB' 'SReclaimable: 111052 kB' 'SUnreclaim: 239300 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: the read loop walks node0's meminfo keys until HugePages_Surp matches]
00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
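For reference, the per-node hugepage counters that get_meminfo just extracted from node0/meminfo are also exposed by the kernel as single-value sysfs files, so they can be read directly without the key scan; this is the standard kernel hugetlb layout, not something the trace itself uses:

    node=0
    base=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB
    cat "$base/nr_hugepages"        # 512 here: 2 MiB pages reserved on node 0
    cat "$base/free_hugepages"      # pages not currently handed out
    cat "$base/surplus_hugepages"   # overcommit pages beyond nr_hugepages (0 here)

The node 1 read below repeats the same get_meminfo call with node=1.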
setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60688356 kB' 'MemFree: 54553580 kB' 'MemUsed: 6134776 kB' 'SwapCached: 0 kB' 'Active: 1899840 kB' 'Inactive: 151276 kB' 'Active(anon): 1684208 kB' 'Inactive(anon): 0 kB' 'Active(file): 215632 kB' 'Inactive(file): 151276 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1908076 kB' 'Mapped: 142924 kB' 'AnonPages: 143176 kB' 'Shmem: 1541168 kB' 'KernelStack: 8248 kB' 'PageTables: 3020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 92812 kB' 'Slab: 289576 kB' 'SReclaimable: 92812 kB' 'SUnreclaim: 196764 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.074 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.075 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.075 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.075 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.075 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.075 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.075 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.075 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.075 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.075 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.075 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:04.075 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.075 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.075 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.075 08:41:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[00:03:04.075 08:41:26 the get_meminfo scan continues: Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free are each read at setup/common.sh@31 with IFS=': ' read -r var val _, tested at @32 against HugePages_Surp and skipped with continue]
00:03:04.075 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:04.075 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:04.075 08:41:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:04.075 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:04.075 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:04.075 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:04.075 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:04.075 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:04.075 node0=512 expecting 512
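Reconstructed from the xtrace records above and below, the hugepages.sh@126-@130 result check amounts to the following sketch; the per-node counts come from the log, while nodes_sys is an assumption (zero pages preallocated before the test):

    nodes_test=(512 1024)   # huge pages the test ended up with on each NUMA node
    nodes_sys=(0 0)         # pages already present before the test (assumed 0 here)
    sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1    # record each distinct test count
        sorted_s[nodes_sys[node]]=1     # record each distinct pre-existing count
        echo "node$node=${nodes_test[node]} expecting ${nodes_test[node]}"
    done
    # final check, as at hugepages.sh@130: the per-node layout must be 512,1024
    (IFS=,; [[ "${nodes_test[*]}" == 512,1024 ]]) && echo PASS

Joining the array with IFS=, is what produces the literal 512,1024 string compared in the @130 record that follows.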
00:03:04.075 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:04.075 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:04.075 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:04.075 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:04.075 node1=1024 expecting 1024
00:03:04.075 08:41:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:04.075
00:03:04.075 real 0m3.093s
00:03:04.075 user 0m1.307s
00:03:04.075 sys 0m1.847s
00:03:04.075 08:41:26 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable
00:03:04.075 08:41:26 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:04.075 ************************************
00:03:04.075 END TEST custom_alloc
00:03:04.075 ************************************
00:03:04.075 08:41:26 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:04.075 08:41:26 setup.sh.hugepages -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']'
00:03:04.075 08:41:26 setup.sh.hugepages -- common/autotest_common.sh@1106 -- # xtrace_disable
00:03:04.075 08:41:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:04.075 ************************************
00:03:04.075 START TEST no_shrink_alloc
00:03:04.075 ************************************
00:03:04.075 08:41:26 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # no_shrink_alloc
00:03:04.075 08:41:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
[00:03:04.075-00:03:04.334 08:41:26 get_test_nr_hugepages traces through setup/hugepages.sh@49-@73: size=2097152, node_ids=('0'), (( size >= default_hugepages )) holds, nr_hugepages=1024; get_test_nr_hugepages_per_node 0 then sets user_nodes=('0'), _nr_hugepages=1024, _no_nodes=2, clears nodes_test and, for the single user node, sets nodes_test[0]=1024 before returning 0; see the sketch below]
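The condensed get_test_nr_hugepages trace above reduces to roughly this sketch. The 2048 kB default huge page size is an assumption, though it is the only value consistent with 2097152 kB coming out as 1024 pages; the even-split branch used when no node list is passed is omitted:

    default_hugepages=2048            # huge page size in kB (2 MiB pages assumed)
    get_test_nr_hugepages() {
        local size=$1                 # requested pool size in kB, e.g. 2097152
        shift
        local node_ids=("$@")         # optional NUMA node list, e.g. 0
        (( size >= default_hugepages )) || return 1
        nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
        nodes_test=()
        if (( ${#node_ids[@]} > 0 )); then
            local node
            for node in "${node_ids[@]}"; do
                nodes_test[node]=$nr_hugepages         # pin the whole pool to node 0
            done
        fi
    }

Called as get_test_nr_hugepages 2097152 0, this leaves nr_hugepages=1024 and nodes_test[0]=1024, matching the trace.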
00:03:04.334 08:41:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:04.334 08:41:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:04.334 08:41:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh
00:03:06.869 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0
00:03:06.869 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:06.869 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:06.869 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:06.869 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:06.869 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:06.869 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:06.869 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:06.869 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:06.869 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:06.869 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:06.869 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:06.869 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:06.869 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:06.869 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:06.869 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:06.869 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:06.869 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:06.869 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:06.869 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:06.869 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:06.869 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:06.869 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:06.869 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:06.869 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:06.869 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:06.869 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:06.869 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:06.869 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:06.869 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:06.869 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:06.869 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:06.869 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:06.869 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:06.869 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:06.869 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:06.869 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:06.869 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:06.869 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322984 kB' 'MemFree: 78539356 kB' 'MemAvailable: 81947972 kB' 'Buffers: 2696 kB' 'Cached: 9307588 kB' 'SwapCached: 0 kB' 'Active: 6275668 kB' 'Inactive: 3517220 kB' 'Active(anon): 5889976 kB' 'Inactive(anon): 0 kB' 'Active(file): 385692 kB' 'Inactive(file): 3517220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485788 kB' 'Mapped: 182772 kB' 'Shmem: 5407372 kB' 'KReclaimable: 203864 kB' 'Slab: 639444 kB' 'SReclaimable: 203864 kB' 'SUnreclaim: 435580 kB' 'KernelStack: 19600 kB' 'PageTables: 8068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001520 kB' 'Committed_AS: 7311220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219204 kB' 'VmallocChunk: 0 kB' 'Percpu: 63744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1332180 kB' 'DirectMap2M: 18270208 kB' 'DirectMap1G: 82837504 kB'
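The setup/common.sh@17-@31 records above are the body of get_meminfo; here is a minimal runnable sketch of the same structure. The sysfs fallback path and the extglob prefix strip come straight from the logged commands, while the surrounding wiring is an assumption:

    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo mem
        # with a node argument, read that node's own meminfo, whose lines are
        # prefixed "Node N "; otherwise fall back to the system-wide file
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the per-node prefix, if any
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the @32 skip repeated all over this log
            echo "$val"                        # e.g. AnonHugePages -> 0
            return 0
        done
    }

get_meminfo AnonHugePages prints 0 here because the snapshot shows AnonHugePages: 0 kB; the key-by-key loop is also what generates the long runs of continue records condensed below.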
[00:03:06.869-00:03:06.870 08:41:29 the @31/@32 loop walks the snapshot above entry by entry: every key preceding AnonHugePages (MemTotal through HardwareCorrupted) is read with IFS=': ' read -r var val _ and skipped with continue]
00:03:06.870 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:06.870 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:06.870 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:06.870 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:06.870 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[00:03:06.870 08:41:29 get_meminfo re-enters setup/common.sh@17-@31 as above, now with get=HugePages_Surp: node= is empty, mem_f=/proc/meminfo, mapfile -t mem, the "Node N " prefix strip, then IFS=': ' read -r var val _]
00:03:06.871 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322984 kB' 'MemFree: 78539412 kB' 'MemAvailable: 81948028 kB' 'Buffers: 2696 kB' 'Cached: 9307592 kB' 'SwapCached: 0 kB' 'Active: 6275356 kB' 'Inactive: 3517220 kB' 'Active(anon): 5889664 kB' 'Inactive(anon): 0 kB' 'Active(file): 385692 kB' 'Inactive(file): 3517220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485508 kB' 'Mapped: 182768 kB' 'Shmem: 5407376 kB' 'KReclaimable: 203864 kB' 'Slab: 639428 kB' 'SReclaimable: 203864 kB' 'SUnreclaim: 435564 kB' 'KernelStack: 19584 kB' 'PageTables: 8008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001520 kB' 'Committed_AS: 7311240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219188 kB' 'VmallocChunk: 0 kB' 'Percpu: 63744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1332180 kB' 'DirectMap2M: 18270208 kB' 'DirectMap1G: 82837504 kB'
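Each of these lookups amounts to a single pass over /proc/meminfo; the HugePages_Surp value just sampled could equally be read with a one-liner like:

    awk '$1 == "HugePages_Surp:" { print $2; exit }' /proc/meminfo   # prints 0 on this box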
[00:03:06.871-00:03:06.872 08:41:29 the same loop walks the new snapshot above: every key preceding HugePages_Surp (MemTotal through HugePages_Rsvd) is read with IFS=': ' read -r var val _, tested at setup/common.sh@32 and skipped with continue]
00:03:06.872 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:06.872 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:06.872 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:06.872 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:06.872 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[00:03:06.872 08:41:29 get_meminfo re-enters setup/common.sh@17-@31 once more with get=HugePages_Rsvd and the same /proc/meminfo setup]
00:03:06.873 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322984 kB' 'MemFree: 78539752 kB' 'MemAvailable: 81948368 kB' 'Buffers: 2696 kB' 'Cached: 9307608 kB' 'SwapCached: 0 kB' 'Active: 6275352 kB' 'Inactive: 3517220 kB' 'Active(anon): 5889660 kB' 'Inactive(anon): 0 kB' 'Active(file): 385692 kB' 'Inactive(file): 3517220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485448 kB' 'Mapped: 182768 kB' 'Shmem: 5407392 kB' 'KReclaimable: 203864 kB' 'Slab: 639420 kB' 'SReclaimable: 203864 kB' 'SUnreclaim: 435556 kB' 'KernelStack: 19568 kB' 'PageTables: 7964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001520 kB' 'Committed_AS: 7311260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219156 kB' 'VmallocChunk: 0 kB' 'Percpu: 63744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1332180 kB' 'DirectMap2M: 18270208 kB' 'DirectMap1G: 82837504 kB'
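For reference, the per-node pools these checks verify are exposed through standard kernel sysfs paths; roughly how a node-pinned allocation like nodes_test[0]=1024 would be inspected and requested outside the harness (2048 kB page size assumed):

    # current 2 MiB huge page count on NUMA node 0
    cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    # request 1024 pages on node 0, mirroring nodes_test[0]=1024 from the trace
    echo 1024 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages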
[00:03:06.873-00:03:07.133 08:41:29 the scan begins again over the snapshot above, this time looking for HugePages_Rsvd: every key from MemTotal through SReclaimable is read with IFS=': ' read -r var val _, tested at setup/common.sh@32 and skipped with continue]
00:03:07.133 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31
-- # read -r var val _ 00:03:07.133 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.133 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.133 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.133 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.133 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.133 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.133 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.133 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.133 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.133 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.133 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.133 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.133 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.133 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.133 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.133 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.133 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.133 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.133 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.133 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.133 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.133 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.133 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.133 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.133 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.133 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.133 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.134 08:41:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:07.134 nr_hugepages=1024 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:07.134 resv_hugepages=0 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:07.134 surplus_hugepages=0 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:07.134 anon_hugepages=0 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322984 kB' 'MemFree: 78541268 kB' 'MemAvailable: 81949884 kB' 'Buffers: 2696 kB' 'Cached: 9307628 kB' 'SwapCached: 0 kB' 'Active: 6276044 kB' 'Inactive: 3517220 kB' 'Active(anon): 5890352 kB' 'Inactive(anon): 0 kB' 'Active(file): 385692 kB' 'Inactive(file): 3517220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486248 kB' 'Mapped: 182768 kB' 'Shmem: 5407412 kB' 'KReclaimable: 203864 kB' 'Slab: 639420 kB' 'SReclaimable: 203864 kB' 'SUnreclaim: 435556 kB' 'KernelStack: 19648 kB' 'PageTables: 8220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001520 kB' 'Committed_AS: 7313608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219156 kB' 'VmallocChunk: 0 kB' 'Percpu: 63744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1332180 kB' 'DirectMap2M: 18270208 kB' 'DirectMap1G: 82837504 kB' 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.134 08:41:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.134 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.135 
08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.135 08:41:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.135 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # 
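At hugepages.sh@107-110 above the harness closes the loop on its earlier reads: surp=0, resv=0, nr_hugepages=1024, and the kernel's live HugePages_Total (re-read and echoed as 1024) must equal their sum. The same accounting check, condensed and reusing the get_meminfo sketch above (the error message is illustrative, not the script's):

nr_hugepages=1024                       # requested in this run
surp=$(get_meminfo HugePages_Surp)      # 0 above
resv=$(get_meminfo HugePages_Rsvd)      # 0 above
total=$(get_meminfo HugePages_Total)    # 1024 above
(( total == nr_hugepages + surp + resv )) ||
    echo "hugepage accounting mismatch: $total != $nr_hugepages+$surp+$resv" >&2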
nodes_sys[${node##*node}]=0 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634628 kB' 'MemFree: 21896100 kB' 'MemUsed: 10738528 kB' 'SwapCached: 0 kB' 'Active: 4376412 kB' 'Inactive: 3365944 kB' 'Active(anon): 4206352 kB' 'Inactive(anon): 0 kB' 'Active(file): 170060 kB' 'Inactive(file): 3365944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7402180 kB' 'Mapped: 39828 kB' 'AnonPages: 343348 kB' 'Shmem: 3866176 kB' 'KernelStack: 11352 kB' 'PageTables: 5044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111052 kB' 'Slab: 350016 kB' 'SReclaimable: 111052 kB' 'SUnreclaim: 238964 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.136 08:41:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.136 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.137 08:41:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.137 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.138 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.138 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.138 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.138 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.138 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.138 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.138 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.138 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.138 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.138 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.138 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.138 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:07.138 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
00:03:07.138 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:07.138 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:07.138 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:07.138 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:07.138 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:07.138 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:07.138 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:07.138 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:07.138 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:07.138 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:03:07.138 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:07.138 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:07.138 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:07.138 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:07.138 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:07.138 08:41:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh
00:03:09.748 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0
00:03:09.748 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:09.748 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:09.748 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:09.748 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:09.748 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:10.011 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:10.011 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:10.011 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:10.011 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:10.011 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:10.011 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:10.011 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:10.011 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:10.011 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:10.011 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:10.011 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:10.011 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:10.011 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:03:10.011 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:10.011 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:10.011 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:10.011 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:10.011 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:10.011 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:10.011 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:10.011 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:10.011 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:10.011 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:10.011 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:10.011 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:10.011 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:10.011 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.011 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:10.011 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:10.011 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.011 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.011 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:10.011 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:10.011 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322984 kB' 'MemFree: 78526388 kB' 'MemAvailable: 81935004 kB' 'Buffers: 2696 kB' 'Cached: 9307724 kB' 'SwapCached: 0 kB' 'Active: 6275392 kB' 'Inactive: 3517220 kB' 'Active(anon): 5889700 kB' 'Inactive(anon): 0 kB' 'Active(file): 385692 kB' 'Inactive(file): 3517220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485612 kB' 'Mapped: 182848 kB' 'Shmem: 5407508 kB' 'KReclaimable: 203864 kB' 'Slab: 639548 kB' 'SReclaimable: 203864 kB' 'SUnreclaim: 435684 kB' 'KernelStack: 19568 kB' 'PageTables: 7968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001520 kB' 'Committed_AS: 7312068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219076 kB' 'VmallocChunk: 0 kB' 'Percpu: 63744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1332180 kB' 'DirectMap2M: 18270208 kB' 'DirectMap1G: 82837504 kB'
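
The get_meminfo helper driving this trace follows a simple pattern: load /proc/meminfo (or the per-node meminfo under sysfs when a node argument is given), strip any "Node <N> " prefix so both file formats parse alike, then scan key by key until the requested field matches and print its value. The sketch below is a hedged reconstruction from the trace, not the verbatim setup/common.sh, so names and the missing-key path are approximate:

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern visible in this trace -- a hedged
# reconstruction, not the verbatim SPDK setup/common.sh.
shopt -s extglob  # needed for the +([0-9]) pattern below

get_meminfo() {
	local get=$1 node=${2:-} mem_f mem line var val _
	mem_f=/proc/meminfo
	# Per-node statistics live under sysfs, as in the trace's @23 test;
	# with no node given the test fails and /proc/meminfo is used.
	[[ -e /sys/devices/system/node/node$node/meminfo ]] &&
		mem_f=/sys/devices/system/node/node$node/meminfo
	mapfile -t mem < "$mem_f"
	# Per-node files prefix each line with "Node <N> "; strip it (the @29 step).
	mem=("${mem[@]#Node +([0-9]) }")
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		if [[ $var == "$get" ]]; then
			echo "$val"  # e.g. 1024 for HugePages_Total in this snapshot
			return 0
		fi
	done
	return 1  # assumption: the real helper's behavior on a missing key isn't shown
}

Against the snapshot just printed, a call like get_meminfo HugePages_Surp walks the '-- # continue' branches for every earlier key until the HugePages_Surp line matches and prints 0, which is exactly the echo 0 / return 0 pair this trace keeps showing.
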
[... repetitive xtrace elided: setup/common.sh@32 walks every /proc/meminfo key from MemTotal through HardwareCorrupted, taking the '-- # continue' branch because none of them matches AnonHugePages ...]
00:03:10.013 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:10.013 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:10.013 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:10.013 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:10.013 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:10.013 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:10.013 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:10.013 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:10.013 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:10.013 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.013 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:10.013 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:10.013 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.013 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.013 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:10.013 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:10.013 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322984 kB' 'MemFree: 78526964 kB' 'MemAvailable: 81935580 kB' 'Buffers: 2696 kB' 'Cached: 9307728 kB' 'SwapCached: 0 kB' 'Active: 6275260 kB' 'Inactive: 3517220 kB' 'Active(anon): 5889568 kB' 'Inactive(anon): 0 kB' 'Active(file): 385692 kB' 'Inactive(file): 3517220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485444 kB' 'Mapped: 182776 kB' 'Shmem: 5407512 kB' 'KReclaimable: 203864 kB' 'Slab: 639412 kB' 'SReclaimable: 203864 kB' 'SUnreclaim: 435548 kB' 'KernelStack: 19568 kB' 'PageTables: 7948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001520 kB' 'Committed_AS: 7311720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219044 kB' 'VmallocChunk: 0 kB' 'Percpu: 63744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1332180 kB' 'DirectMap2M: 18270208 kB' 'DirectMap1G: 82837504 kB'
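
For orientation, here is what hugepages.sh is doing with these queries, reduced to a single node: it records anonymous, surplus, and reserved hugepage counts, folds the surplus into the per-node tally (the '(( nodes_test[node] += 0 ))' step seen earlier), and compares the result with the expected pool size, producing the 'node0=1024 expecting 1024' line. A simplified sketch under stated assumptions -- seeding the tally from HugePages_Total is my guess, since the trace only shows the surplus increment and the final comparison:

#!/usr/bin/env bash
# Simplified one-node sketch of the verify_nr_hugepages bookkeeping in this
# trace; the real setup/hugepages.sh tracks arrays across all NUMA nodes.
# Assumes the get_meminfo sketch above; expected pool size taken from the log.
expected=1024
node=0

anon=$(get_meminfo AnonHugePages)   # 0 here; feeds other assertions in the full script
resv=$(get_meminfo HugePages_Rsvd)  # 0 here; likewise
surp=$(get_meminfo HugePages_Surp)  # 0 here

nodes_test[node]=$(get_meminfo HugePages_Total)  # assumption: seed from the total
(( nodes_test[node] += surp ))  # mirrors '(( nodes_test[node] += 0 ))' in the trace

echo "node$node=${nodes_test[node]} expecting $expected"
[[ ${nodes_test[node]} == "$expected" ]]  # the @130 '[[ 1024 == 1024 ]]' check
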
[... repetitive xtrace elided: the same per-key walk repeats for the HugePages_Surp query, with every key from MemTotal through HugePages_Rsvd taking the '-- # continue' branch ...]
00:03:10.015 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:10.015 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:10.015 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:10.015 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:10.015 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:10.015 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:10.015 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:10.015 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:10.015 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:10.015 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.015 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:10.015 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:10.015 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.015 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.015 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:10.015 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:10.015 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322984 kB' 'MemFree: 78527280 kB' 'MemAvailable: 81935896 kB' 'Buffers: 2696 kB' 'Cached: 9307728 kB' 'SwapCached: 0 kB' 'Active: 6275396 kB' 'Inactive: 3517220 kB' 'Active(anon): 5889704 kB' 'Inactive(anon): 0 kB' 'Active(file): 385692 kB' 'Inactive(file): 3517220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485672 kB' 'Mapped: 182776 kB' 'Shmem: 5407512 kB' 'KReclaimable: 203864 kB' 'Slab: 639492 kB' 'SReclaimable: 203864 kB' 'SUnreclaim: 435628 kB' 'KernelStack: 19616 kB' 'PageTables: 8128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001520 kB' 'Committed_AS: 7312108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 219044 kB' 'VmallocChunk: 0 kB' 'Percpu: 63744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1332180 kB' 'DirectMap2M: 18270208 kB' 'DirectMap1G: 82837504 kB'
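
Stepping back from the per-key scans: the INFO line earlier in this stage, 'Requested 512 hugepages but 1024 already allocated on node0', is the behavior the no_shrink_alloc case pins down. Rerunning setup with NRHUGE=512 and CLEAR_HUGE=no must not shrink the existing 1024-page pool, which is why every snapshot here still reports HugePages_Total: 1024. A hedged sketch of the sysfs mechanics behind that message follows; the paths are standard Linux, but the skip-if-larger policy is paraphrased from the log rather than copied from scripts/setup.sh:

#!/usr/bin/env bash
# Sketch of the no-shrink allocation policy implied by the INFO line above.
# Standard sysfs paths; the decision logic is paraphrased from the log.
node=0
want=${NRHUGE:-512}
sys=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB

have=$(<"$sys/nr_hugepages")
if (( have >= want )); then
	# Leave the larger existing pool alone instead of shrinking it.
	echo "INFO: Requested $want hugepages but $have already allocated on node$node"
else
	echo "$want" > "$sys/nr_hugepages"  # grow the pool (needs root)
fi
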
[... repetitive xtrace elided: the per-key walk repeats once more for the HugePages_Rsvd query; every key from MemTotal onward takes the '-- # continue' branch, and the scan continues below ...]
00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:10.017 nr_hugepages=1024 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:10.017 resv_hugepages=0 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:10.017 surplus_hugepages=0 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:10.017 anon_hugepages=0 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 
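
For reference, the get_meminfo helper being traced here boils down to a small field scanner. The sketch below is reconstructed purely from the xtrace entries (variable names follow the trace; spdk/test/setup/common.sh is the authoritative source and may differ in detail):

    # Sketch: get_meminfo <field> [node] prints <field> from /proc/meminfo,
    # or from the per-node copy when a node index is given.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # with an empty $node this path does not exist, so the global file is used
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # per-node files prefix every line with "Node N "; strip that
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            # one comparison per field, which is why the trace shows a long scan
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

Called as get_meminfo HugePages_Rsvd it prints 0 here; the node form (get_meminfo HugePages_Surp 0, traced further down) reads node0's meminfo instead.
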
00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93322984 kB' 'MemFree: 78525952 kB' 'MemAvailable: 81934568 kB' 'Buffers: 2696 kB' 'Cached: 9307768 kB' 'SwapCached: 0 kB' 'Active: 6275756 kB' 'Inactive: 3517220 kB' 'Active(anon): 5890064 kB' 'Inactive(anon): 0 kB' 'Active(file): 385692 kB' 'Inactive(file): 3517220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486000 kB' 'Mapped: 182784 kB' 'Shmem: 5407552 kB' 'KReclaimable: 203864 kB' 'Slab: 639476 kB' 'SReclaimable: 203864 kB' 'SUnreclaim: 435612 kB' 'KernelStack: 19584 kB' 'PageTables: 8028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 54001520 kB' 'Committed_AS: 7313412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218996 kB' 'VmallocChunk: 0 kB' 'Percpu: 63744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1332180 kB' 'DirectMap2M: 18270208 kB' 'DirectMap1G: 82837504 kB'
00:03:10.017 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the same per-field scan repeats against HugePages_Total, walking the snapshot just printed from MemTotal through Unaccepted without a match, 00:03:10.017-00:03:10.278]
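
The snapshot printed just above can be checked by hand with stock commands; purely illustrative, but it shows where the scanned numbers come from:

    # hugepage counters from the global snapshot (the fields the scan walks)
    grep -E '^(HugePages_|Hugepagesize|Hugetlb)' /proc/meminfo
    # per-NUMA-node view; note the "Node 0" prefix that get_meminfo strips
    grep -i huge /sys/devices/system/node/node0/meminfo
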
00:03:10.278 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:10.278 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:10.278 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:10.278 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:10.278 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:10.278 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:10.278 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:10.278 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:10.278 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:10.278 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:10.278 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:10.278 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:10.278 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:10.278 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:10.278 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:10.278 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:10.278 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:10.278 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:10.278 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.278 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.278 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:10.278 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:10.278 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.278 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.278 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.278 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
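
The get_nodes step traced above sizes up the NUMA topology before the per-node checks. A rough reconstruction from the @27-@33 entries (array names follow the trace; reading nr_hugepages from sysfs is an assumption consistent with the values 1024 and 0 seen here):

    # Sketch of the per-node bookkeeping in setup/hugepages.sh (reconstruction).
    shopt -s extglob
    declare -A nodes_sys
    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # index by numeric suffix: node0 -> 0, node1 -> 1; the value is that
            # node's current 2 MB hugepage count (1024 on node0, 0 on node1 here)
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        no_nodes=${#nodes_sys[@]}   # the trace reports no_nodes=2
        (( no_nodes > 0 ))          # fail if no node directories were found
    }

The @115-@117 loop then asks get_meminfo for HugePages_Surp per node and folds it into nodes_test, which is what the node0 read below is doing.
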
00:03:10.279 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32634628 kB' 'MemFree: 21890840 kB' 'MemUsed: 10743788 kB' 'SwapCached: 0 kB' 'Active: 4377420 kB' 'Inactive: 3365944 kB' 'Active(anon): 4207360 kB' 'Inactive(anon): 0 kB' 'Active(file): 170060 kB' 'Inactive(file): 3365944 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7402204 kB' 'Mapped: 39836 kB' 'AnonPages: 344524 kB' 'Shmem: 3866200 kB' 'KernelStack: 11384 kB' 'PageTables: 5296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111052 kB' 'Slab: 350040 kB' 'SReclaimable: 111052 kB' 'SUnreclaim: 238988 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:10.279 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the same per-field scan runs once more over the node0 snapshot above for HugePages_Surp, MemTotal through HugePages_Free, no match, 00:03:10.279-00:03:10.280]
00:03:10.280 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.280 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:10.280 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:10.280 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:10.280 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:10.280 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:10.280 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:10.280 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:10.280 node0=1024 expecting 1024 00:03:10.280 08:41:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:10.280 00:03:10.280 real 0m5.986s 00:03:10.280 user 0m2.362s 00:03:10.280 sys 0m3.708s 00:03:10.280 08:41:32 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:10.280 08:41:32 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:10.280 ************************************ 00:03:10.280 END TEST no_shrink_alloc 00:03:10.280 ************************************
00:03:10.280 08:41:32 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:10.280 08:41:32 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:10.280 08:41:32 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:10.280 08:41:32 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:10.280 08:41:32 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:10.280 08:41:32 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:10.280 08:41:32 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:10.280 08:41:32 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:10.280 08:41:32 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:10.280 08:41:32 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:10.280 08:41:32 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:10.280 08:41:32 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:10.280 08:41:32 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:10.280 08:41:32 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:10.280 00:03:10.280 real 0m23.203s 00:03:10.280 user 0m9.188s 00:03:10.280 sys 0m13.610s 00:03:10.280 08:41:32 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:10.280 08:41:32 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:10.280 ************************************ 00:03:10.280 END TEST hugepages 00:03:10.280 ************************************ 00:03:10.280 08:41:32 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/driver.sh 00:03:10.280 08:41:32 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:10.280 08:41:32 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:10.280 08:41:32 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:10.280 ************************************ 00:03:10.280 START TEST driver 00:03:10.280 ************************************ 00:03:10.280 08:41:32 setup.sh.driver -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/driver.sh 00:03:10.280 * Looking for test storage...
00:03:10.280 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup 00:03:10.280 08:41:32 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:10.280 08:41:32 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:10.280 08:41:32 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh reset 00:03:14.464 08:41:36 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:14.464 08:41:36 setup.sh.driver -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:14.464 08:41:36 setup.sh.driver -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:14.464 08:41:36 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:14.464 ************************************ 00:03:14.464 START TEST guess_driver 00:03:14.464 ************************************ 00:03:14.464 08:41:36 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # guess_driver 00:03:14.464 08:41:36 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:14.464 08:41:36 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:14.464 08:41:36 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:14.464 08:41:36 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:14.464 08:41:36 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:14.464 08:41:36 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:14.464 08:41:36 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:14.464 08:41:36 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:14.464 08:41:36 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:14.464 08:41:36 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 172 > 0 )) 00:03:14.464 08:41:36 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:14.464 08:41:36 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:14.464 08:41:36 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:14.464 08:41:36 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:14.464 08:41:36 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:14.464 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:14.464 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:14.464 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:14.464 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:14.464 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:14.464 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:14.464 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:14.464 08:41:36 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:14.464 08:41:36 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:14.464 08:41:36 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:14.464 08:41:36 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:14.464 08:41:36 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:14.464 Looking for driver=vfio-pci 00:03:14.464 08:41:36 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:14.464 08:41:36 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:14.464 08:41:36 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:14.464 08:41:36 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh config 00:03:16.999 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ denied == \-\> ]] 00:03:16.999 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:03:16.999 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.258 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.258 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.258 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.258 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.258 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.258 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.517 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.517 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.517 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.517 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.517 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.517 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.517 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.517 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.517 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.517 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.517 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.517 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.517 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.517 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.517 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.517 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.517 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.517 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.517 08:41:39 
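pick_driver settles on vfio-pci above because the box exposes IOMMU groups (the '(( 172 > 0 ))' check) and modprobe can resolve the whole module chain, which is what the '== *.ko*' match against the --show-depends output verifies. Condensed into a sketch (function name ours):

    pick_vfio() {
        shopt -s nullglob
        local groups=(/sys/kernel/iommu_groups/*)
        ((${#groups[@]} > 0)) || return 1          # no IOMMU groups, no vfio
        # the chain must resolve to real .ko objects, mirroring driver.sh@12
        modprobe --show-depends vfio_pci | grep -q '\.ko' || return 1
        echo vfio-pci
    }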
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.517 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.517 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.517 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.517 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.517 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.517 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.517 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.517 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.517 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.517 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.517 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.517 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.518 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.518 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.518 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.518 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.518 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.518 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.518 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.518 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:17.518 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:17.518 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:17.518 08:41:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:18.204 08:41:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:18.204 08:41:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:18.204 08:41:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:18.462 08:41:40 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:18.462 08:41:40 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:18.462 08:41:40 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:18.462 08:41:40 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh reset 00:03:22.652 00:03:22.652 real 0m7.950s 00:03:22.652 user 0m2.377s 00:03:22.652 sys 0m4.060s 00:03:22.652 08:41:44 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:22.652 08:41:44 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:22.652 
************************************ 00:03:22.652 END TEST guess_driver 00:03:22.652 ************************************ 00:03:22.652 00:03:22.652 real 0m12.172s 00:03:22.652 user 0m3.639s 00:03:22.652 sys 0m6.274s 00:03:22.652 08:41:44 setup.sh.driver -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:22.652 08:41:44 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:22.652 ************************************ 00:03:22.652 END TEST driver 00:03:22.652 ************************************ 00:03:22.652 08:41:44 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/devices.sh 00:03:22.652 08:41:44 setup.sh -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:22.652 08:41:44 setup.sh -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:22.652 08:41:44 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:22.652 ************************************ 00:03:22.652 START TEST devices 00:03:22.652 ************************************ 00:03:22.652 08:41:44 setup.sh.devices -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/devices.sh 00:03:22.652 * Looking for test storage... 00:03:22.652 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup 00:03:22.652 08:41:45 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:22.652 08:41:45 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:22.652 08:41:45 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:22.652 08:41:45 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh reset 00:03:25.940 08:41:48 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:25.940 08:41:48 setup.sh.devices -- common/autotest_common.sh@1668 -- # zoned_devs=() 00:03:25.940 08:41:48 setup.sh.devices -- common/autotest_common.sh@1668 -- # local -gA zoned_devs 00:03:25.940 08:41:48 setup.sh.devices -- common/autotest_common.sh@1669 -- # local nvme bdf 00:03:25.940 08:41:48 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:03:25.941 08:41:48 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme0n1 00:03:25.941 08:41:48 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme0n1 00:03:25.941 08:41:48 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:25.941 08:41:48 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:03:25.941 08:41:48 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:03:25.941 08:41:48 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme1n1 00:03:25.941 08:41:48 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme1n1 00:03:25.941 08:41:48 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:25.941 08:41:48 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ none != none ]] 00:03:25.941 08:41:48 setup.sh.devices -- common/autotest_common.sh@1671 -- # for nvme in /sys/block/nvme* 00:03:25.941 08:41:48 setup.sh.devices -- common/autotest_common.sh@1672 -- # is_block_zoned nvme1n2 00:03:25.941 08:41:48 setup.sh.devices -- common/autotest_common.sh@1661 -- # local device=nvme1n2 00:03:25.941 08:41:48 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ -e 
/sys/block/nvme1n2/queue/zoned ]] 00:03:25.941 08:41:48 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ host-managed != none ]] 00:03:25.941 08:41:48 setup.sh.devices -- common/autotest_common.sh@1673 -- # zoned_devs["${nvme##*/}"]=0000:5f:00.0 00:03:25.941 08:41:48 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:25.941 08:41:48 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:25.941 08:41:48 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:25.941 08:41:48 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:25.941 08:41:48 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:25.941 08:41:48 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:25.941 08:41:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:25.941 08:41:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:25.941 08:41:48 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:03:25.941 08:41:48 setup.sh.devices -- setup/devices.sh@203 -- # [[ 0000:5f:00.0 == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:25.941 08:41:48 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:25.941 08:41:48 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:25.941 08:41:48 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:25.941 No valid GPT data, bailing 00:03:25.941 08:41:48 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:25.941 08:41:48 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:25.941 08:41:48 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:25.941 08:41:48 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:25.941 08:41:48 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:25.941 08:41:48 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:25.941 08:41:48 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:25.941 08:41:48 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:25.941 08:41:48 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:25.941 08:41:48 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:03:25.941 08:41:48 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:25.941 08:41:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:25.941 08:41:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:25.941 08:41:48 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5f:00.0 00:03:25.941 08:41:48 setup.sh.devices -- setup/devices.sh@203 -- # [[ 0000:5f:00.0 == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:03:25.941 08:41:48 setup.sh.devices -- setup/devices.sh@203 -- # continue 00:03:25.941 08:41:48 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:25.941 08:41:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:03:25.941 08:41:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:25.941 08:41:48 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5f:00.0 00:03:25.941 08:41:48 setup.sh.devices -- setup/devices.sh@203 -- # [[ 0000:5f:00.0 == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:03:25.941 08:41:48 setup.sh.devices -- setup/devices.sh@203 -- # continue 00:03:25.941 
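get_zoned_devs above reads each namespace's queue/zoned attribute: nvme0n1 and nvme1n1 report 'none', while nvme1n2 reports 'host-managed' and is therefore recorded in zoned_devs (keyed to controller 0000:5f:00.0) and skipped by the later 'continue' branches. The same scan in isolation; the array name is ours:

    declare -A zoned
    for sysdev in /sys/block/nvme*; do
        [[ -e $sysdev/queue/zoned ]] || continue
        # anything other than "none" (e.g. host-managed) is excluded from the mount tests
        [[ $(<"$sysdev/queue/zoned") != none ]] && zoned[${sysdev##*/}]=1
    done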
08:41:48 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:25.941 08:41:48 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:25.941 08:41:48 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:25.941 08:41:48 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:25.941 08:41:48 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:25.941 08:41:48 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:25.941 ************************************ 00:03:25.941 START TEST nvme_mount 00:03:25.941 ************************************ 00:03:25.941 08:41:48 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # nvme_mount 00:03:25.941 08:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:25.941 08:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:25.941 08:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:03:25.941 08:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:25.941 08:41:48 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:25.941 08:41:48 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:25.941 08:41:48 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:25.941 08:41:48 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:25.941 08:41:48 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:25.941 08:41:48 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:25.941 08:41:48 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:25.941 08:41:48 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:25.941 08:41:48 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:25.941 08:41:48 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:25.941 08:41:48 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:25.941 08:41:48 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:25.941 08:41:48 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:25.941 08:41:48 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:25.941 08:41:48 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:26.877 Creating new GPT entries in memory. 00:03:26.877 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:26.877 other utilities. 00:03:26.877 08:41:49 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:26.877 08:41:49 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:26.877 08:41:49 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:26.877 08:41:49 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:26.877 08:41:49 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:27.813 Creating new GPT entries in memory. 00:03:27.813 The operation has completed successfully. 00:03:27.813 08:41:50 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:27.813 08:41:50 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:27.813 08:41:50 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1128986 00:03:27.813 08:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:03:27.813 08:41:50 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:27.813 08:41:50 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:03:27.813 08:41:50 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:27.813 08:41:50 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:27.813 08:41:50 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:03:27.813 08:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:27.813 08:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:27.813 08:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:28.072 08:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:03:28.072 08:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:28.072 08:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:28.072 08:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:28.072 08:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:28.072 08:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:28.072 08:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.072 08:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:28.072 08:41:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:28.072 08:41:50 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.072 08:41:50 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh config 00:03:30.604 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.604 08:41:52 setup.sh.devices.nvme_mount -- 
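The partition_drive/mkfs sequence xtraced above reduces to four destructive commands: zap the GPT, carve one partition from sector 2048 through 2099199 (2097152 sectors x 512 B = exactly 1 GiB, matching size=1073741824), format it ext4, and mount it at the test mount point. As a standalone sketch using this run's device and path:

    disk=/dev/nvme0n1
    mnt=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount
    sgdisk "$disk" --zap-all                  # destroys all data on $disk
    sgdisk "$disk" --new=1:2048:2099199       # one 1 GiB partition
    mkfs.ext4 -qF "${disk}p1"
    mkdir -p "$mnt" && mount "${disk}p1" "$mnt"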
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.604 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.604 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:30.604 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:30.604 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.604 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.605 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.605 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.605 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.605 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.605 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.605 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.605 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.605 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.605 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.605 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.605 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.605 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.605 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.605 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.605 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.605 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.605 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.605 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.605 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.605 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.605 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.605 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.605 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.605 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.605 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.605 08:41:52 
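The verify step above does not inspect mounts directly; it re-runs 'setup.sh config' with PCI_ALLOWED pinned to the test controller and pattern-matches the 'Active devices: ...' status that setup.sh prints for 0000:5e:00.0 (here 'mount@nvme0n1:nvme0n1p1, so not binding PCI dev'). Roughly, with our own variable names:

    expected=nvme0n1:nvme0n1p1
    found=0
    while read -r pci _ _ status; do
        [[ $pci == 0000:5e:00.0 ]] || continue
        [[ $status == *"Active devices: "*"$expected"* ]] && found=1
    done < <(PCI_ALLOWED=0000:5e:00.0 \
        /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh config)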
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.605 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.605 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.605 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.605 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:30.605 08:41:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.605 08:41:53 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:30.605 08:41:53 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:30.605 08:41:53 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:03:30.605 08:41:53 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:30.605 08:41:53 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:30.605 08:41:53 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:30.605 08:41:53 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:03:30.605 08:41:53 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:03:30.605 08:41:53 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:30.605 08:41:53 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:30.605 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:30.605 08:41:53 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:30.605 08:41:53 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:30.863 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:30.863 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:30.863 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:30.863 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:30.863 08:41:53 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:30.863 08:41:53 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:30.863 08:41:53 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:03:30.863 08:41:53 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:30.863 08:41:53 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:30.863 08:41:53 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:03:31.121 08:41:53 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:31.121 08:41:53 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:31.121 08:41:53 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:31.121 08:41:53 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:03:31.121 08:41:53 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:31.121 08:41:53 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:31.121 08:41:53 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:31.121 08:41:53 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:31.121 08:41:53 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:31.121 08:41:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.121 08:41:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:31.121 08:41:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:31.121 08:41:53 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.121 08:41:53 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh config 00:03:33.647 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:33.647 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.647 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:33.647 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:33.647 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:33.647 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.647 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:33.647 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.647 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:33.647 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.647 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:33.647 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.647 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:33.647 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.647 
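Between the two mounts, cleanup_nvme (devices.sh@20-28 in the xtrace) unmounted the test directory and wiped the signatures wipefs reported above, so the whole-disk mkfs could start from a blank device. As a sketch:

    cleanup_nvme() {
        local mnt=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount
        mountpoint -q "$mnt" && umount "$mnt"
        # wipe the partition first, then the disk's GPT/PMBR signatures
        [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
        [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1
    }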
08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:33.647 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.647 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:33.647 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.647 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:33.647 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.647 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:33.648 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.648 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:33.648 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.648 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:33.648 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.648 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:33.648 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.648 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:33.648 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.648 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:33.648 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.648 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:33.648 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.648 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:33.648 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.648 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:33.648 08:41:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.648 08:41:56 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:33.648 08:41:56 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:33.648 08:41:56 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:03:33.648 08:41:56 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:33.648 08:41:56 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:33.648 08:41:56 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:03:33.648 08:41:56 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:03:33.648 08:41:56 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:33.648 08:41:56 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:33.648 08:41:56 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:33.648 08:41:56 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:33.648 08:41:56 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:33.648 08:41:56 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:33.648 08:41:56 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:33.648 08:41:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.648 08:41:56 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:33.648 08:41:56 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:33.648 08:41:56 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:33.648 08:41:56 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh config 00:03:36.181 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:36.181 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.439 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:36.439 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:36.439 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:36.439 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.439 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:36.439 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.439 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:36.439 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.439 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:36.439 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.439 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:36.439 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.439 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:36.439 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.439 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:36.439 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.439 08:41:58 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:36.439 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.440 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:36.440 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.440 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:36.440 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.440 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:36.440 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.440 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:36.440 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.440 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:36.440 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.440 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:36.440 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.440 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:36.440 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.440 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:36.440 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.440 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:36.440 08:41:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.699 08:41:59 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:36.699 08:41:59 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:36.699 08:41:59 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:36.699 08:41:59 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:36.699 08:41:59 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount 00:03:36.699 08:41:59 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:36.699 08:41:59 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:36.699 08:41:59 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:36.699 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:36.699 00:03:36.699 real 0m10.811s 00:03:36.699 user 0m3.170s 00:03:36.699 sys 0m5.385s 00:03:36.699 08:41:59 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:36.699 08:41:59 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:36.699 ************************************ 00:03:36.699 END TEST nvme_mount 00:03:36.699 
************************************ 00:03:36.699 08:41:59 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:36.699 08:41:59 setup.sh.devices -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:03:36.699 08:41:59 setup.sh.devices -- common/autotest_common.sh@1106 -- # xtrace_disable 00:03:36.699 08:41:59 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:36.699 ************************************ 00:03:36.699 START TEST dm_mount 00:03:36.699 ************************************ 00:03:36.699 08:41:59 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # dm_mount 00:03:36.699 08:41:59 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:36.699 08:41:59 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:36.699 08:41:59 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:36.699 08:41:59 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:36.699 08:41:59 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:36.699 08:41:59 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:36.699 08:41:59 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:36.699 08:41:59 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:36.699 08:41:59 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:36.699 08:41:59 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:36.699 08:41:59 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:36.699 08:41:59 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:36.699 08:41:59 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:36.699 08:41:59 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:36.699 08:41:59 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:36.699 08:41:59 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:36.699 08:41:59 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:36.699 08:41:59 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:36.699 08:41:59 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:36.699 08:41:59 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:36.699 08:41:59 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:37.634 Creating new GPT entries in memory. 00:03:37.634 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:37.634 other utilities. 00:03:37.634 08:42:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:37.634 08:42:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:37.634 08:42:00 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:37.634 08:42:00 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:37.634 08:42:00 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:38.651 Creating new GPT entries in memory. 00:03:38.651 The operation has completed successfully. 
00:03:38.651 08:42:01 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:38.651 08:42:01 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:38.651 08:42:01 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:38.651 08:42:01 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:38.651 08:42:01 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:40.028 The operation has completed successfully. 00:03:40.028 08:42:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:40.028 08:42:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:40.028 08:42:02 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1133212 00:03:40.028 08:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:40.028 08:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount 00:03:40.028 08:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:40.028 08:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:40.028 08:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:40.028 08:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:40.028 08:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:40.028 08:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:40.028 08:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:40.028 08:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:40.028 08:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:40.028 08:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:40.028 08:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:40.028 08:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount 00:03:40.028 08:42:02 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount size= 00:03:40.028 08:42:02 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount 00:03:40.028 08:42:02 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:40.029 08:42:02 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:40.029 08:42:02 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount 00:03:40.029 08:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount 
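dm_mount's device-mapper step above creates a target named nvme_dm_test, waits for /dev/mapper to populate, resolves it to dm-0 via readlink, and confirms both partitions list dm-0 under their holders/ directory. The xtrace does not show the table handed to dmsetup, so the linear concatenation below is an assumption, chosen because it would make both nvme0n1p1 and nvme0n1p2 holders of the node, as observed:

    p1=/dev/nvme0n1p1 p2=/dev/nvme0n1p2
    {
        # assumed table: p1 first, p2 appended; offsets are in 512 B sectors
        echo "0 $(blockdev --getsz $p1) linear $p1 0"
        echo "$(blockdev --getsz $p1) $(blockdev --getsz $p2) linear $p2 0"
    } | dmsetup create nvme_dm_test
    dm=$(readlink -f /dev/mapper/nvme_dm_test)          # e.g. /dev/dm-0
    [[ -e /sys/class/block/nvme0n1p1/holders/${dm##*/} ]]   # partitions now hold dm-0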
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:40.029 08:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:40.029 08:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:40.029 08:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount 00:03:40.029 08:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:40.029 08:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:40.029 08:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:40.029 08:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:40.029 08:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:40.029 08:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.029 08:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:40.029 08:42:02 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:40.029 08:42:02 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.029 08:42:02 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh config 00:03:41.929 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:41.929 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:42.498 08:42:04 
setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.498 08:42:04 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh config 00:03:45.030 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.030 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:03:45.288 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.546 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:45.547 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:45.547 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:45.547 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:45.547 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount 00:03:45.547 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:45.547 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:45.547 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:45.547 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:45.547 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:45.547 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:45.547 08:42:07 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:45.547 00:03:45.547 real 0m8.858s 00:03:45.547 user 0m2.095s 00:03:45.547 sys 0m3.645s 00:03:45.547 08:42:07 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # xtrace_disable 00:03:45.547 08:42:07 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:45.547 ************************************ 00:03:45.547 END TEST dm_mount 
00:03:45.547 ************************************
00:03:45.547 08:42:07 setup.sh.devices -- setup/devices.sh@1 -- # cleanup
00:03:45.547 08:42:07 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme
00:03:45.547 08:42:07 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/nvme_mount
00:03:45.547 08:42:07 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:45.547 08:42:08 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:03:45.547 08:42:08 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:45.547 08:42:08 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:45.805 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:03:45.805 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:03:45.805 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:03:45.805 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:03:45.805 08:42:08 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:03:45.805 08:42:08 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/setup/dm_mount
00:03:45.805 08:42:08 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:03:45.805 08:42:08 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:45.805 08:42:08 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:03:45.805 08:42:08 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:03:45.805 08:42:08 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:03:45.805
00:03:45.805 real 0m23.331s
00:03:45.805 user 0m6.532s
00:03:45.805 sys 0m11.275s
00:03:45.805 08:42:08 setup.sh.devices -- common/autotest_common.sh@1125 -- # xtrace_disable
00:03:45.805 08:42:08 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:03:45.805 ************************************
00:03:45.805 END TEST devices
00:03:45.805 ************************************
00:03:45.805
00:03:45.805 real 1m19.113s
00:03:45.805 user 0m26.317s
00:03:45.805 sys 0m43.237s
00:03:45.805 08:42:08 setup.sh -- common/autotest_common.sh@1125 -- # xtrace_disable
00:03:45.805 08:42:08 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:45.805 ************************************
00:03:45.805 END TEST setup.sh
00:03:45.805 ************************************
00:03:45.805 08:42:08 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh status
00:03:49.092 Hugepages
00:03:49.092 node hugesize free / total
00:03:49.092 node0 1048576kB 0 / 0
00:03:49.092 node0 2048kB 2048 / 2048
00:03:49.092 node1 1048576kB 0 / 0
00:03:49.092 node1 2048kB 0 / 0
00:03:49.092
00:03:49.092 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:49.092 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:03:49.092 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:03:49.092 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:03:49.093 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:03:49.093 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:03:49.093 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:03:49.093 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:03:49.093 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:03:49.093 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:03:49.093 NVMe 0000:5f:00.0 1b96 2600 0 nvme nvme1 nvme1n1
nvme1n2 00:03:49.093 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:49.093 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:49.093 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:49.093 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:49.093 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:49.093 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:49.093 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:49.093 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:49.093 08:42:11 -- spdk/autotest.sh@130 -- # uname -s 00:03:49.093 08:42:11 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:49.093 08:42:11 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:49.093 08:42:11 -- common/autotest_common.sh@1530 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh 00:03:51.630 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:03:51.630 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:51.630 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:51.630 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:51.630 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:51.630 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:51.630 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:51.630 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:51.630 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:51.630 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:51.630 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:51.630 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:51.630 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:51.630 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:51.630 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:51.630 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:51.630 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:52.567 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:52.567 08:42:15 -- common/autotest_common.sh@1531 -- # sleep 1 00:03:53.945 08:42:16 -- common/autotest_common.sh@1532 -- # bdfs=() 00:03:53.945 08:42:16 -- common/autotest_common.sh@1532 -- # local bdfs 00:03:53.945 08:42:16 -- common/autotest_common.sh@1533 -- # bdfs=($(get_nvme_bdfs)) 00:03:53.945 08:42:16 -- common/autotest_common.sh@1533 -- # get_nvme_bdfs 00:03:53.945 08:42:16 -- common/autotest_common.sh@1512 -- # bdfs=() 00:03:53.945 08:42:16 -- common/autotest_common.sh@1512 -- # local bdfs 00:03:53.945 08:42:16 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:53.945 08:42:16 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:03:53.945 08:42:16 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:53.945 08:42:16 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:03:53.945 08:42:16 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:5e:00.0 00:03:53.945 08:42:16 -- common/autotest_common.sh@1535 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh reset 00:03:56.476 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:03:56.476 Waiting for block devices as requested 00:03:56.476 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:56.476 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:56.735 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:56.735 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:56.735 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:56.735 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:56.994 0000:00:04.2 (8086 
2021): vfio-pci -> ioatdma 00:03:56.994 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:56.994 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:56.994 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:57.254 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:57.254 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:57.254 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:57.512 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:57.512 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:57.512 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:57.771 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:57.771 08:42:20 -- common/autotest_common.sh@1537 -- # for bdf in "${bdfs[@]}" 00:03:57.771 08:42:20 -- common/autotest_common.sh@1538 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:57.771 08:42:20 -- common/autotest_common.sh@1501 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:57.771 08:42:20 -- common/autotest_common.sh@1501 -- # grep 0000:5e:00.0/nvme/nvme 00:03:57.771 08:42:20 -- common/autotest_common.sh@1501 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:57.771 08:42:20 -- common/autotest_common.sh@1502 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:57.771 08:42:20 -- common/autotest_common.sh@1506 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:57.771 08:42:20 -- common/autotest_common.sh@1506 -- # printf '%s\n' nvme0 00:03:57.771 08:42:20 -- common/autotest_common.sh@1538 -- # nvme_ctrlr=/dev/nvme0 00:03:57.771 08:42:20 -- common/autotest_common.sh@1539 -- # [[ -z /dev/nvme0 ]] 00:03:57.771 08:42:20 -- common/autotest_common.sh@1544 -- # nvme id-ctrl /dev/nvme0 00:03:57.771 08:42:20 -- common/autotest_common.sh@1544 -- # grep oacs 00:03:57.771 08:42:20 -- common/autotest_common.sh@1544 -- # cut -d: -f2 00:03:57.771 08:42:20 -- common/autotest_common.sh@1544 -- # oacs=' 0xf' 00:03:57.771 08:42:20 -- common/autotest_common.sh@1545 -- # oacs_ns_manage=8 00:03:57.771 08:42:20 -- common/autotest_common.sh@1547 -- # [[ 8 -ne 0 ]] 00:03:57.771 08:42:20 -- common/autotest_common.sh@1553 -- # nvme id-ctrl /dev/nvme0 00:03:57.771 08:42:20 -- common/autotest_common.sh@1553 -- # grep unvmcap 00:03:57.771 08:42:20 -- common/autotest_common.sh@1553 -- # cut -d: -f2 00:03:57.771 08:42:20 -- common/autotest_common.sh@1553 -- # unvmcap=' 0' 00:03:57.771 08:42:20 -- common/autotest_common.sh@1554 -- # [[ 0 -eq 0 ]] 00:03:57.771 08:42:20 -- common/autotest_common.sh@1556 -- # continue 00:03:57.771 08:42:20 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:57.771 08:42:20 -- common/autotest_common.sh@729 -- # xtrace_disable 00:03:57.771 08:42:20 -- common/autotest_common.sh@10 -- # set +x 00:03:57.771 08:42:20 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:57.771 08:42:20 -- common/autotest_common.sh@723 -- # xtrace_disable 00:03:57.771 08:42:20 -- common/autotest_common.sh@10 -- # set +x 00:03:57.771 08:42:20 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/setup.sh 00:04:00.302 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:04:00.302 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:00.302 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:00.302 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:00.302 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:00.302 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:00.302 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:00.302 
0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:00.302 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:00.302 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:00.302 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:00.302 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:00.302 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:00.302 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:00.302 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:00.302 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:00.302 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:01.236 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:01.236 08:42:23 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:01.236 08:42:23 -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:01.236 08:42:23 -- common/autotest_common.sh@10 -- # set +x 00:04:01.236 08:42:23 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:01.236 08:42:23 -- common/autotest_common.sh@1590 -- # mapfile -t bdfs 00:04:01.236 08:42:23 -- common/autotest_common.sh@1590 -- # get_nvme_bdfs_by_id 0x0a54 00:04:01.236 08:42:23 -- common/autotest_common.sh@1576 -- # bdfs=() 00:04:01.236 08:42:23 -- common/autotest_common.sh@1576 -- # local bdfs 00:04:01.236 08:42:23 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs 00:04:01.236 08:42:23 -- common/autotest_common.sh@1512 -- # bdfs=() 00:04:01.236 08:42:23 -- common/autotest_common.sh@1512 -- # local bdfs 00:04:01.551 08:42:23 -- common/autotest_common.sh@1513 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:01.551 08:42:23 -- common/autotest_common.sh@1513 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:01.551 08:42:23 -- common/autotest_common.sh@1513 -- # jq -r '.config[].params.traddr' 00:04:01.551 08:42:23 -- common/autotest_common.sh@1514 -- # (( 1 == 0 )) 00:04:01.551 08:42:23 -- common/autotest_common.sh@1518 -- # printf '%s\n' 0000:5e:00.0 00:04:01.551 08:42:23 -- common/autotest_common.sh@1578 -- # for bdf in $(get_nvme_bdfs) 00:04:01.551 08:42:23 -- common/autotest_common.sh@1579 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:01.551 08:42:23 -- common/autotest_common.sh@1579 -- # device=0x0a54 00:04:01.551 08:42:23 -- common/autotest_common.sh@1580 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:01.551 08:42:23 -- common/autotest_common.sh@1581 -- # bdfs+=($bdf) 00:04:01.551 08:42:23 -- common/autotest_common.sh@1585 -- # printf '%s\n' 0000:5e:00.0 00:04:01.551 08:42:23 -- common/autotest_common.sh@1591 -- # [[ -z 0000:5e:00.0 ]] 00:04:01.551 08:42:23 -- common/autotest_common.sh@1596 -- # spdk_tgt_pid=1142865 00:04:01.551 08:42:23 -- common/autotest_common.sh@1597 -- # waitforlisten 1142865 00:04:01.551 08:42:23 -- common/autotest_common.sh@1595 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.551 08:42:23 -- common/autotest_common.sh@830 -- # '[' -z 1142865 ']' 00:04:01.551 08:42:23 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:01.551 08:42:23 -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:01.551 08:42:23 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:01.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
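What waitforlisten does while the target boots: it polls the JSON-RPC socket until spdk_tgt answers. A minimal bash sketch of that loop, assuming the default /var/tmp/spdk.sock socket; the retry budget and sleep interval here are illustrative, not the exact values autotest_common.sh uses:

  # Poll the freshly started target until its RPC socket accepts requests.
  # rpc_get_methods is a cheap call that any SPDK target can answer.
  for ((i = 0; i < 100; i++)); do
      if scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null; then
          break    # target is up and listening
      fi
      sleep 0.5
  done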
00:04:01.551 08:42:23 -- common/autotest_common.sh@839 -- # xtrace_disable
00:04:01.551 08:42:23 -- common/autotest_common.sh@10 -- # set +x
00:04:01.551 [2024-06-09 08:42:23.929467] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... [2024-06-09 08:42:23.929515] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1142865 ]
00:04:01.551 EAL: No free 2048 kB hugepages reported on node 1
00:04:01.551 [2024-06-09 08:42:23.985433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:01.551 [2024-06-09 08:42:24.055984] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0
00:04:02.485 08:42:24 -- common/autotest_common.sh@859 -- # (( i == 0 ))
00:04:02.485 08:42:24 -- common/autotest_common.sh@863 -- # return 0
00:04:02.485 08:42:24 -- common/autotest_common.sh@1599 -- # bdf_id=0
00:04:02.485 08:42:24 -- common/autotest_common.sh@1600 -- # for bdf in "${bdfs[@]}"
00:04:02.485 08:42:24 -- common/autotest_common.sh@1601 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
00:04:05.824 nvme0n1
00:04:05.824 08:42:27 -- common/autotest_common.sh@1603 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:04:05.824 [2024-06-09 08:42:27.860940] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18
00:04:05.824 [2024-06-09 08:42:27.860973] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18
00:04:05.824 request:
00:04:05.824 {
00:04:05.824 "nvme_ctrlr_name": "nvme0",
00:04:05.824 "password": "test",
00:04:05.824 "method": "bdev_nvme_opal_revert",
00:04:05.824 "req_id": 1
00:04:05.824 }
00:04:05.824 Got JSON-RPC error response
00:04:05.824 response:
00:04:05.824 {
00:04:05.824 "code": -32603,
00:04:05.824 "message": "Internal error"
00:04:05.824 }
00:04:05.824 08:42:27 -- common/autotest_common.sh@1603 -- # true
00:04:05.824 08:42:27 -- common/autotest_common.sh@1604 -- # (( ++bdf_id ))
00:04:05.824 08:42:27 -- common/autotest_common.sh@1607 -- # killprocess 1142865
00:04:05.824 08:42:27 -- common/autotest_common.sh@949 -- # '[' -z 1142865 ']'
00:04:05.824 08:42:27 -- common/autotest_common.sh@953 -- # kill -0 1142865
00:04:05.824 08:42:27 -- common/autotest_common.sh@954 -- # uname
00:04:05.824 08:42:27 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:04:05.824 08:42:27 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1142865
00:04:05.824 08:42:27 -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:04:05.824 08:42:27 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:04:05.824 08:42:27 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1142865'
00:04:05.824 killing process with pid 1142865
00:04:05.824 08:42:27 -- common/autotest_common.sh@968 -- # kill 1142865
00:04:05.824 08:42:27 -- common/autotest_common.sh@973 -- # wait 1142865
00:04:07.199 08:42:29 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']'
00:04:07.199 08:42:29 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']'
00:04:07.199 08:42:29 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:04:07.199 08:42:29 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:04:07.199 08:42:29 -- spdk/autotest.sh@162 -- # timing_enter lib
00:04:07.199 08:42:29 --
common/autotest_common.sh@723 -- # xtrace_disable 00:04:07.199 08:42:29 -- common/autotest_common.sh@10 -- # set +x 00:04:07.199 08:42:29 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:07.199 08:42:29 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/env.sh 00:04:07.199 08:42:29 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:07.199 08:42:29 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:07.199 08:42:29 -- common/autotest_common.sh@10 -- # set +x 00:04:07.199 ************************************ 00:04:07.199 START TEST env 00:04:07.199 ************************************ 00:04:07.199 08:42:29 env -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/env.sh 00:04:07.199 * Looking for test storage... 00:04:07.199 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env 00:04:07.199 08:42:29 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/memory/memory_ut 00:04:07.199 08:42:29 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:07.199 08:42:29 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:07.199 08:42:29 env -- common/autotest_common.sh@10 -- # set +x 00:04:07.199 ************************************ 00:04:07.199 START TEST env_memory 00:04:07.199 ************************************ 00:04:07.199 08:42:29 env.env_memory -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/memory/memory_ut 00:04:07.199 00:04:07.199 00:04:07.199 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.199 http://cunit.sourceforge.net/ 00:04:07.199 00:04:07.199 00:04:07.199 Suite: memory 00:04:07.199 Test: alloc and free memory map ...[2024-06-09 08:42:29.707697] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:07.199 passed 00:04:07.199 Test: mem map translation ...[2024-06-09 08:42:29.726176] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:07.199 [2024-06-09 08:42:29.726192] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:07.199 [2024-06-09 08:42:29.726225] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:07.199 [2024-06-09 08:42:29.726231] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:07.199 passed 00:04:07.460 Test: mem map registration ...[2024-06-09 08:42:29.762741] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:07.460 [2024-06-09 08:42:29.762756] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:07.460 passed 00:04:07.460 Test: mem map adjacent registrations ...passed 00:04:07.460 00:04:07.460 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.460 suites 1 1 n/a 0 
0 00:04:07.460 tests 4 4 4 0 0 00:04:07.460 asserts 152 152 152 0 n/a 00:04:07.460 00:04:07.460 Elapsed time = 0.126 seconds 00:04:07.460 00:04:07.460 real 0m0.132s 00:04:07.460 user 0m0.126s 00:04:07.460 sys 0m0.006s 00:04:07.460 08:42:29 env.env_memory -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:07.460 08:42:29 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:07.460 ************************************ 00:04:07.460 END TEST env_memory 00:04:07.460 ************************************ 00:04:07.460 08:42:29 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:07.460 08:42:29 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:07.460 08:42:29 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:07.460 08:42:29 env -- common/autotest_common.sh@10 -- # set +x 00:04:07.460 ************************************ 00:04:07.460 START TEST env_vtophys 00:04:07.460 ************************************ 00:04:07.460 08:42:29 env.env_vtophys -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:07.460 EAL: lib.eal log level changed from notice to debug 00:04:07.460 EAL: Detected lcore 0 as core 0 on socket 0 00:04:07.460 EAL: Detected lcore 1 as core 1 on socket 0 00:04:07.460 EAL: Detected lcore 2 as core 2 on socket 0 00:04:07.460 EAL: Detected lcore 3 as core 3 on socket 0 00:04:07.460 EAL: Detected lcore 4 as core 4 on socket 0 00:04:07.460 EAL: Detected lcore 5 as core 5 on socket 0 00:04:07.460 EAL: Detected lcore 6 as core 6 on socket 0 00:04:07.460 EAL: Detected lcore 7 as core 8 on socket 0 00:04:07.460 EAL: Detected lcore 8 as core 9 on socket 0 00:04:07.460 EAL: Detected lcore 9 as core 10 on socket 0 00:04:07.460 EAL: Detected lcore 10 as core 11 on socket 0 00:04:07.460 EAL: Detected lcore 11 as core 12 on socket 0 00:04:07.460 EAL: Detected lcore 12 as core 13 on socket 0 00:04:07.460 EAL: Detected lcore 13 as core 16 on socket 0 00:04:07.460 EAL: Detected lcore 14 as core 17 on socket 0 00:04:07.460 EAL: Detected lcore 15 as core 18 on socket 0 00:04:07.460 EAL: Detected lcore 16 as core 19 on socket 0 00:04:07.460 EAL: Detected lcore 17 as core 20 on socket 0 00:04:07.460 EAL: Detected lcore 18 as core 21 on socket 0 00:04:07.460 EAL: Detected lcore 19 as core 25 on socket 0 00:04:07.460 EAL: Detected lcore 20 as core 26 on socket 0 00:04:07.460 EAL: Detected lcore 21 as core 27 on socket 0 00:04:07.460 EAL: Detected lcore 22 as core 28 on socket 0 00:04:07.460 EAL: Detected lcore 23 as core 29 on socket 0 00:04:07.460 EAL: Detected lcore 24 as core 0 on socket 1 00:04:07.460 EAL: Detected lcore 25 as core 1 on socket 1 00:04:07.460 EAL: Detected lcore 26 as core 2 on socket 1 00:04:07.460 EAL: Detected lcore 27 as core 3 on socket 1 00:04:07.460 EAL: Detected lcore 28 as core 4 on socket 1 00:04:07.460 EAL: Detected lcore 29 as core 5 on socket 1 00:04:07.460 EAL: Detected lcore 30 as core 6 on socket 1 00:04:07.460 EAL: Detected lcore 31 as core 8 on socket 1 00:04:07.460 EAL: Detected lcore 32 as core 9 on socket 1 00:04:07.460 EAL: Detected lcore 33 as core 10 on socket 1 00:04:07.460 EAL: Detected lcore 34 as core 11 on socket 1 00:04:07.460 EAL: Detected lcore 35 as core 12 on socket 1 00:04:07.460 EAL: Detected lcore 36 as core 13 on socket 1 00:04:07.460 EAL: Detected lcore 37 as core 16 on socket 1 00:04:07.460 EAL: Detected lcore 38 as core 17 on socket 1 00:04:07.460 EAL: Detected 
lcore 39 as core 18 on socket 1 00:04:07.460 EAL: Detected lcore 40 as core 19 on socket 1 00:04:07.460 EAL: Detected lcore 41 as core 20 on socket 1 00:04:07.460 EAL: Detected lcore 42 as core 21 on socket 1 00:04:07.460 EAL: Detected lcore 43 as core 25 on socket 1 00:04:07.460 EAL: Detected lcore 44 as core 26 on socket 1 00:04:07.460 EAL: Detected lcore 45 as core 27 on socket 1 00:04:07.460 EAL: Detected lcore 46 as core 28 on socket 1 00:04:07.460 EAL: Detected lcore 47 as core 29 on socket 1 00:04:07.460 EAL: Detected lcore 48 as core 0 on socket 0 00:04:07.460 EAL: Detected lcore 49 as core 1 on socket 0 00:04:07.460 EAL: Detected lcore 50 as core 2 on socket 0 00:04:07.460 EAL: Detected lcore 51 as core 3 on socket 0 00:04:07.460 EAL: Detected lcore 52 as core 4 on socket 0 00:04:07.460 EAL: Detected lcore 53 as core 5 on socket 0 00:04:07.460 EAL: Detected lcore 54 as core 6 on socket 0 00:04:07.460 EAL: Detected lcore 55 as core 8 on socket 0 00:04:07.460 EAL: Detected lcore 56 as core 9 on socket 0 00:04:07.460 EAL: Detected lcore 57 as core 10 on socket 0 00:04:07.460 EAL: Detected lcore 58 as core 11 on socket 0 00:04:07.460 EAL: Detected lcore 59 as core 12 on socket 0 00:04:07.460 EAL: Detected lcore 60 as core 13 on socket 0 00:04:07.460 EAL: Detected lcore 61 as core 16 on socket 0 00:04:07.460 EAL: Detected lcore 62 as core 17 on socket 0 00:04:07.460 EAL: Detected lcore 63 as core 18 on socket 0 00:04:07.460 EAL: Detected lcore 64 as core 19 on socket 0 00:04:07.460 EAL: Detected lcore 65 as core 20 on socket 0 00:04:07.460 EAL: Detected lcore 66 as core 21 on socket 0 00:04:07.460 EAL: Detected lcore 67 as core 25 on socket 0 00:04:07.460 EAL: Detected lcore 68 as core 26 on socket 0 00:04:07.460 EAL: Detected lcore 69 as core 27 on socket 0 00:04:07.460 EAL: Detected lcore 70 as core 28 on socket 0 00:04:07.460 EAL: Detected lcore 71 as core 29 on socket 0 00:04:07.460 EAL: Detected lcore 72 as core 0 on socket 1 00:04:07.460 EAL: Detected lcore 73 as core 1 on socket 1 00:04:07.460 EAL: Detected lcore 74 as core 2 on socket 1 00:04:07.460 EAL: Detected lcore 75 as core 3 on socket 1 00:04:07.460 EAL: Detected lcore 76 as core 4 on socket 1 00:04:07.460 EAL: Detected lcore 77 as core 5 on socket 1 00:04:07.460 EAL: Detected lcore 78 as core 6 on socket 1 00:04:07.460 EAL: Detected lcore 79 as core 8 on socket 1 00:04:07.460 EAL: Detected lcore 80 as core 9 on socket 1 00:04:07.460 EAL: Detected lcore 81 as core 10 on socket 1 00:04:07.460 EAL: Detected lcore 82 as core 11 on socket 1 00:04:07.460 EAL: Detected lcore 83 as core 12 on socket 1 00:04:07.460 EAL: Detected lcore 84 as core 13 on socket 1 00:04:07.460 EAL: Detected lcore 85 as core 16 on socket 1 00:04:07.460 EAL: Detected lcore 86 as core 17 on socket 1 00:04:07.460 EAL: Detected lcore 87 as core 18 on socket 1 00:04:07.460 EAL: Detected lcore 88 as core 19 on socket 1 00:04:07.460 EAL: Detected lcore 89 as core 20 on socket 1 00:04:07.460 EAL: Detected lcore 90 as core 21 on socket 1 00:04:07.460 EAL: Detected lcore 91 as core 25 on socket 1 00:04:07.460 EAL: Detected lcore 92 as core 26 on socket 1 00:04:07.460 EAL: Detected lcore 93 as core 27 on socket 1 00:04:07.460 EAL: Detected lcore 94 as core 28 on socket 1 00:04:07.460 EAL: Detected lcore 95 as core 29 on socket 1 00:04:07.460 EAL: Maximum logical cores by configuration: 128 00:04:07.460 EAL: Detected CPU lcores: 96 00:04:07.460 EAL: Detected NUMA nodes: 2 00:04:07.460 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:07.460 EAL: Detected 
shared linkage of DPDK 00:04:07.460 EAL: No shared files mode enabled, IPC will be disabled 00:04:07.460 EAL: Bus pci wants IOVA as 'DC' 00:04:07.460 EAL: Buses did not request a specific IOVA mode. 00:04:07.460 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:07.460 EAL: Selected IOVA mode 'VA' 00:04:07.460 EAL: No free 2048 kB hugepages reported on node 1 00:04:07.460 EAL: Probing VFIO support... 00:04:07.460 EAL: IOMMU type 1 (Type 1) is supported 00:04:07.460 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:07.460 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:07.460 EAL: VFIO support initialized 00:04:07.460 EAL: Ask a virtual area of 0x2e000 bytes 00:04:07.460 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:07.460 EAL: Setting up physically contiguous memory... 00:04:07.460 EAL: Setting maximum number of open files to 524288 00:04:07.460 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:07.460 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:07.460 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:07.460 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.460 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:07.460 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.460 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.460 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:07.460 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:07.460 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.461 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:07.461 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.461 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.461 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:07.461 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:07.461 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.461 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:07.461 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.461 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.461 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:07.461 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:07.461 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.461 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:07.461 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:07.461 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.461 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:07.461 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:07.461 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:07.461 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.461 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:07.461 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:07.461 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.461 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:07.461 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:07.461 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.461 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:07.461 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:07.461 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.461 EAL: Virtual area found at 
0x201400c00000 (size = 0x400000000) 00:04:07.461 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:07.461 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.461 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:07.461 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:07.461 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.461 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:07.461 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:07.461 EAL: Ask a virtual area of 0x61000 bytes 00:04:07.461 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:07.461 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:07.461 EAL: Ask a virtual area of 0x400000000 bytes 00:04:07.461 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:07.461 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:07.461 EAL: Hugepages will be freed exactly as allocated. 00:04:07.461 EAL: No shared files mode enabled, IPC is disabled 00:04:07.461 EAL: No shared files mode enabled, IPC is disabled 00:04:07.461 EAL: TSC frequency is ~2100000 KHz 00:04:07.461 EAL: Main lcore 0 is ready (tid=7f1e5dba9a00;cpuset=[0]) 00:04:07.461 EAL: Trying to obtain current memory policy. 00:04:07.461 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.461 EAL: Restoring previous memory policy: 0 00:04:07.461 EAL: request: mp_malloc_sync 00:04:07.461 EAL: No shared files mode enabled, IPC is disabled 00:04:07.461 EAL: Heap on socket 0 was expanded by 2MB 00:04:07.461 EAL: No shared files mode enabled, IPC is disabled 00:04:07.461 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:07.461 EAL: Mem event callback 'spdk:(nil)' registered 00:04:07.461 00:04:07.461 00:04:07.461 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.461 http://cunit.sourceforge.net/ 00:04:07.461 00:04:07.461 00:04:07.461 Suite: components_suite 00:04:07.461 Test: vtophys_malloc_test ...passed 00:04:07.461 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:07.461 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.461 EAL: Restoring previous memory policy: 4 00:04:07.461 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.461 EAL: request: mp_malloc_sync 00:04:07.461 EAL: No shared files mode enabled, IPC is disabled 00:04:07.461 EAL: Heap on socket 0 was expanded by 4MB 00:04:07.461 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.461 EAL: request: mp_malloc_sync 00:04:07.461 EAL: No shared files mode enabled, IPC is disabled 00:04:07.461 EAL: Heap on socket 0 was shrunk by 4MB 00:04:07.461 EAL: Trying to obtain current memory policy. 00:04:07.461 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.461 EAL: Restoring previous memory policy: 4 00:04:07.461 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.461 EAL: request: mp_malloc_sync 00:04:07.461 EAL: No shared files mode enabled, IPC is disabled 00:04:07.461 EAL: Heap on socket 0 was expanded by 6MB 00:04:07.461 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.461 EAL: request: mp_malloc_sync 00:04:07.461 EAL: No shared files mode enabled, IPC is disabled 00:04:07.461 EAL: Heap on socket 0 was shrunk by 6MB 00:04:07.461 EAL: Trying to obtain current memory policy. 
00:04:07.461 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.461 EAL: Restoring previous memory policy: 4 00:04:07.461 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.461 EAL: request: mp_malloc_sync 00:04:07.461 EAL: No shared files mode enabled, IPC is disabled 00:04:07.461 EAL: Heap on socket 0 was expanded by 10MB 00:04:07.461 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.461 EAL: request: mp_malloc_sync 00:04:07.461 EAL: No shared files mode enabled, IPC is disabled 00:04:07.461 EAL: Heap on socket 0 was shrunk by 10MB 00:04:07.461 EAL: Trying to obtain current memory policy. 00:04:07.461 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.461 EAL: Restoring previous memory policy: 4 00:04:07.461 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.461 EAL: request: mp_malloc_sync 00:04:07.461 EAL: No shared files mode enabled, IPC is disabled 00:04:07.461 EAL: Heap on socket 0 was expanded by 18MB 00:04:07.461 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.461 EAL: request: mp_malloc_sync 00:04:07.461 EAL: No shared files mode enabled, IPC is disabled 00:04:07.461 EAL: Heap on socket 0 was shrunk by 18MB 00:04:07.461 EAL: Trying to obtain current memory policy. 00:04:07.461 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.461 EAL: Restoring previous memory policy: 4 00:04:07.461 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.461 EAL: request: mp_malloc_sync 00:04:07.461 EAL: No shared files mode enabled, IPC is disabled 00:04:07.461 EAL: Heap on socket 0 was expanded by 34MB 00:04:07.461 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.461 EAL: request: mp_malloc_sync 00:04:07.461 EAL: No shared files mode enabled, IPC is disabled 00:04:07.461 EAL: Heap on socket 0 was shrunk by 34MB 00:04:07.461 EAL: Trying to obtain current memory policy. 00:04:07.461 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.461 EAL: Restoring previous memory policy: 4 00:04:07.461 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.461 EAL: request: mp_malloc_sync 00:04:07.461 EAL: No shared files mode enabled, IPC is disabled 00:04:07.461 EAL: Heap on socket 0 was expanded by 66MB 00:04:07.461 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.461 EAL: request: mp_malloc_sync 00:04:07.461 EAL: No shared files mode enabled, IPC is disabled 00:04:07.461 EAL: Heap on socket 0 was shrunk by 66MB 00:04:07.461 EAL: Trying to obtain current memory policy. 00:04:07.461 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.461 EAL: Restoring previous memory policy: 4 00:04:07.461 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.461 EAL: request: mp_malloc_sync 00:04:07.461 EAL: No shared files mode enabled, IPC is disabled 00:04:07.461 EAL: Heap on socket 0 was expanded by 130MB 00:04:07.720 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.720 EAL: request: mp_malloc_sync 00:04:07.720 EAL: No shared files mode enabled, IPC is disabled 00:04:07.720 EAL: Heap on socket 0 was shrunk by 130MB 00:04:07.720 EAL: Trying to obtain current memory policy. 
00:04:07.720 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.720 EAL: Restoring previous memory policy: 4 00:04:07.720 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.720 EAL: request: mp_malloc_sync 00:04:07.720 EAL: No shared files mode enabled, IPC is disabled 00:04:07.720 EAL: Heap on socket 0 was expanded by 258MB 00:04:07.720 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.720 EAL: request: mp_malloc_sync 00:04:07.720 EAL: No shared files mode enabled, IPC is disabled 00:04:07.720 EAL: Heap on socket 0 was shrunk by 258MB 00:04:07.720 EAL: Trying to obtain current memory policy. 00:04:07.720 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.720 EAL: Restoring previous memory policy: 4 00:04:07.720 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.720 EAL: request: mp_malloc_sync 00:04:07.720 EAL: No shared files mode enabled, IPC is disabled 00:04:07.720 EAL: Heap on socket 0 was expanded by 514MB 00:04:07.977 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.977 EAL: request: mp_malloc_sync 00:04:07.977 EAL: No shared files mode enabled, IPC is disabled 00:04:07.977 EAL: Heap on socket 0 was shrunk by 514MB 00:04:07.977 EAL: Trying to obtain current memory policy. 00:04:07.977 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.235 EAL: Restoring previous memory policy: 4 00:04:08.236 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.236 EAL: request: mp_malloc_sync 00:04:08.236 EAL: No shared files mode enabled, IPC is disabled 00:04:08.236 EAL: Heap on socket 0 was expanded by 1026MB 00:04:08.236 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.494 EAL: request: mp_malloc_sync 00:04:08.494 EAL: No shared files mode enabled, IPC is disabled 00:04:08.494 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:08.494 passed 00:04:08.494 00:04:08.494 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.494 suites 1 1 n/a 0 0 00:04:08.494 tests 2 2 2 0 0 00:04:08.494 asserts 497 497 497 0 n/a 00:04:08.494 00:04:08.494 Elapsed time = 0.963 seconds 00:04:08.494 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.494 EAL: request: mp_malloc_sync 00:04:08.494 EAL: No shared files mode enabled, IPC is disabled 00:04:08.494 EAL: Heap on socket 0 was shrunk by 2MB 00:04:08.494 EAL: No shared files mode enabled, IPC is disabled 00:04:08.494 EAL: No shared files mode enabled, IPC is disabled 00:04:08.494 EAL: No shared files mode enabled, IPC is disabled 00:04:08.494 00:04:08.494 real 0m1.065s 00:04:08.494 user 0m0.629s 00:04:08.494 sys 0m0.411s 00:04:08.494 08:42:30 env.env_vtophys -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:08.494 08:42:30 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:08.494 ************************************ 00:04:08.494 END TEST env_vtophys 00:04:08.494 ************************************ 00:04:08.494 08:42:30 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/pci/pci_ut 00:04:08.494 08:42:30 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:08.494 08:42:30 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:08.494 08:42:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.494 ************************************ 00:04:08.494 START TEST env_pci 00:04:08.494 ************************************ 00:04:08.494 08:42:31 env.env_pci -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/pci/pci_ut 00:04:08.494 00:04:08.494 00:04:08.494 CUnit - A unit testing 
framework for C - Version 2.1-3 00:04:08.494 http://cunit.sourceforge.net/ 00:04:08.494 00:04:08.494 00:04:08.494 Suite: pci 00:04:08.494 Test: pci_hook ...[2024-06-09 08:42:31.021511] /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1144138 has claimed it 00:04:08.494 EAL: Cannot find device (10000:00:01.0) 00:04:08.494 EAL: Failed to attach device on primary process 00:04:08.494 passed 00:04:08.494 00:04:08.494 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.494 suites 1 1 n/a 0 0 00:04:08.494 tests 1 1 1 0 0 00:04:08.494 asserts 25 25 25 0 n/a 00:04:08.494 00:04:08.494 Elapsed time = 0.026 seconds 00:04:08.494 00:04:08.494 real 0m0.045s 00:04:08.494 user 0m0.017s 00:04:08.494 sys 0m0.028s 00:04:08.494 08:42:31 env.env_pci -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:08.494 08:42:31 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:08.494 ************************************ 00:04:08.494 END TEST env_pci 00:04:08.494 ************************************ 00:04:08.754 08:42:31 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:08.754 08:42:31 env -- env/env.sh@15 -- # uname 00:04:08.754 08:42:31 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:08.754 08:42:31 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:08.754 08:42:31 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:08.754 08:42:31 env -- common/autotest_common.sh@1100 -- # '[' 5 -le 1 ']' 00:04:08.754 08:42:31 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:08.754 08:42:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.754 ************************************ 00:04:08.754 START TEST env_dpdk_post_init 00:04:08.754 ************************************ 00:04:08.754 08:42:31 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:08.754 EAL: Detected CPU lcores: 96 00:04:08.754 EAL: Detected NUMA nodes: 2 00:04:08.754 EAL: Detected shared linkage of DPDK 00:04:08.754 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:08.754 EAL: Selected IOVA mode 'VA' 00:04:08.754 EAL: No free 2048 kB hugepages reported on node 1 00:04:08.754 EAL: VFIO support initialized 00:04:08.754 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:08.754 EAL: Using IOMMU type 1 (Type 1) 00:04:08.754 EAL: Ignore mapping IO port bar(1) 00:04:08.754 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:08.754 EAL: Ignore mapping IO port bar(1) 00:04:08.754 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:08.754 EAL: Ignore mapping IO port bar(1) 00:04:08.754 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:08.754 EAL: Ignore mapping IO port bar(1) 00:04:08.754 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:08.754 EAL: Ignore mapping IO port bar(1) 00:04:08.754 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:08.754 EAL: Ignore mapping IO port bar(1) 00:04:08.754 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:09.012 EAL: Ignore mapping IO 
port bar(1) 00:04:09.012 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:09.012 EAL: Ignore mapping IO port bar(1) 00:04:09.012 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:09.581 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:09.581 EAL: Ignore mapping IO port bar(1) 00:04:09.581 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:09.581 EAL: Ignore mapping IO port bar(1) 00:04:09.581 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:09.581 EAL: Ignore mapping IO port bar(1) 00:04:09.581 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:09.581 EAL: Ignore mapping IO port bar(1) 00:04:09.581 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:09.581 EAL: Ignore mapping IO port bar(1) 00:04:09.581 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:09.839 EAL: Ignore mapping IO port bar(1) 00:04:09.839 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:09.839 EAL: Ignore mapping IO port bar(1) 00:04:09.839 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:09.839 EAL: Ignore mapping IO port bar(1) 00:04:09.839 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:13.116 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:13.116 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:13.116 Starting DPDK initialization... 00:04:13.116 Starting SPDK post initialization... 00:04:13.116 SPDK NVMe probe 00:04:13.116 Attaching to 0000:5e:00.0 00:04:13.116 Attached to 0000:5e:00.0 00:04:13.116 Cleaning up... 
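The probe sequence above (I/OAT channels claimed by spdk_ioat, the NVMe controller at 0000:5e:00.0 claimed by spdk_nvme) can be replayed outside the harness. A sketch assuming the same workspace layout; HUGEMEM is an optional setup.sh knob and the value shown is illustrative, while the -c and --base-virtaddr flags mirror the ones logged:

  cd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
  sudo HUGEMEM=4096 ./scripts/setup.sh        # bind NVMe/I/OAT functions to vfio-pci
  sudo ./test/env/env_dpdk_post_init/env_dpdk_post_init \
      -c 0x1 --base-virtaddr=0x200000000000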
00:04:13.116 00:04:13.116 real 0m4.348s 00:04:13.116 user 0m3.290s 00:04:13.116 sys 0m0.126s 00:04:13.116 08:42:35 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:13.116 08:42:35 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:13.116 ************************************ 00:04:13.116 END TEST env_dpdk_post_init 00:04:13.116 ************************************ 00:04:13.116 08:42:35 env -- env/env.sh@26 -- # uname 00:04:13.116 08:42:35 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:13.116 08:42:35 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:13.116 08:42:35 env -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:13.116 08:42:35 env -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:13.116 08:42:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.116 ************************************ 00:04:13.116 START TEST env_mem_callbacks 00:04:13.116 ************************************ 00:04:13.116 08:42:35 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:13.116 EAL: Detected CPU lcores: 96 00:04:13.116 EAL: Detected NUMA nodes: 2 00:04:13.116 EAL: Detected shared linkage of DPDK 00:04:13.116 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:13.116 EAL: Selected IOVA mode 'VA' 00:04:13.116 EAL: No free 2048 kB hugepages reported on node 1 00:04:13.116 EAL: VFIO support initialized 00:04:13.116 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:13.116 00:04:13.116 00:04:13.116 CUnit - A unit testing framework for C - Version 2.1-3 00:04:13.116 http://cunit.sourceforge.net/ 00:04:13.116 00:04:13.116 00:04:13.116 Suite: memory 00:04:13.116 Test: test ... 
00:04:13.116 register 0x200000200000 2097152 00:04:13.116 malloc 3145728 00:04:13.116 register 0x200000400000 4194304 00:04:13.116 buf 0x200000500000 len 3145728 PASSED 00:04:13.116 malloc 64 00:04:13.116 buf 0x2000004fff40 len 64 PASSED 00:04:13.116 malloc 4194304 00:04:13.116 register 0x200000800000 6291456 00:04:13.116 buf 0x200000a00000 len 4194304 PASSED 00:04:13.116 free 0x200000500000 3145728 00:04:13.116 free 0x2000004fff40 64 00:04:13.116 unregister 0x200000400000 4194304 PASSED 00:04:13.116 free 0x200000a00000 4194304 00:04:13.116 unregister 0x200000800000 6291456 PASSED 00:04:13.116 malloc 8388608 00:04:13.116 register 0x200000400000 10485760 00:04:13.116 buf 0x200000600000 len 8388608 PASSED 00:04:13.116 free 0x200000600000 8388608 00:04:13.116 unregister 0x200000400000 10485760 PASSED 00:04:13.116 passed 00:04:13.116 00:04:13.116 Run Summary: Type Total Ran Passed Failed Inactive 00:04:13.116 suites 1 1 n/a 0 0 00:04:13.116 tests 1 1 1 0 0 00:04:13.116 asserts 15 15 15 0 n/a 00:04:13.116 00:04:13.116 Elapsed time = 0.005 seconds 00:04:13.116 00:04:13.116 real 0m0.054s 00:04:13.116 user 0m0.020s 00:04:13.116 sys 0m0.034s 00:04:13.116 08:42:35 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:13.116 08:42:35 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:13.116 ************************************ 00:04:13.116 END TEST env_mem_callbacks 00:04:13.116 ************************************ 00:04:13.116 00:04:13.116 real 0m6.062s 00:04:13.116 user 0m4.243s 00:04:13.116 sys 0m0.890s 00:04:13.116 08:42:35 env -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:13.116 08:42:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.116 ************************************ 00:04:13.116 END TEST env 00:04:13.116 ************************************ 00:04:13.116 08:42:35 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/rpc.sh 00:04:13.116 08:42:35 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:13.116 08:42:35 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:13.116 08:42:35 -- common/autotest_common.sh@10 -- # set +x 00:04:13.374 ************************************ 00:04:13.374 START TEST rpc 00:04:13.374 ************************************ 00:04:13.374 08:42:35 rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/rpc.sh 00:04:13.374 * Looking for test storage... 00:04:13.374 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc 00:04:13.374 08:42:35 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:13.374 08:42:35 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1144959 00:04:13.374 08:42:35 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.374 08:42:35 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1144959 00:04:13.374 08:42:35 rpc -- common/autotest_common.sh@830 -- # '[' -z 1144959 ']' 00:04:13.374 08:42:35 rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:13.374 08:42:35 rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:13.374 08:42:35 rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:13.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
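Every test in this log is framed the same way: a START banner, the command under test, a real/user/sys triple, then an END banner, which is what makes the log splittable per test. A simplified sketch of the run_test wrapper that produces those frames (the real helper in autotest_common.sh also manages xtrace state and timing records, so this is an approximation, not the actual implementation):

  run_test() {
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"                 # prints the real/user/sys triple seen above
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
  }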
00:04:13.374 08:42:35 rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:13.374 08:42:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.374 [2024-06-09 08:42:35.818193] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:04:13.374 [2024-06-09 08:42:35.818246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1144959 ] 00:04:13.374 EAL: No free 2048 kB hugepages reported on node 1 00:04:13.374 [2024-06-09 08:42:35.872699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.632 [2024-06-09 08:42:35.953462] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:13.632 [2024-06-09 08:42:35.953494] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1144959' to capture a snapshot of events at runtime. 00:04:13.632 [2024-06-09 08:42:35.953501] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:13.632 [2024-06-09 08:42:35.953507] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:13.632 [2024-06-09 08:42:35.953512] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1144959 for offline analysis/debug. 00:04:13.632 [2024-06-09 08:42:35.953530] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.197 08:42:36 rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:14.197 08:42:36 rpc -- common/autotest_common.sh@863 -- # return 0 00:04:14.197 08:42:36 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc 00:04:14.197 08:42:36 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc 00:04:14.197 08:42:36 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:14.197 08:42:36 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:14.197 08:42:36 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:14.197 08:42:36 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:14.197 08:42:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.197 ************************************ 00:04:14.197 START TEST rpc_integrity 00:04:14.197 ************************************ 00:04:14.197 08:42:36 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:04:14.197 08:42:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:14.197 08:42:36 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:14.197 08:42:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.197 08:42:36 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:14.197 08:42:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:14.197 08:42:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:14.197 08:42:36 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:14.197 08:42:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:14.197 08:42:36 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:14.197 08:42:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.197 08:42:36 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:14.197 08:42:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:14.197 08:42:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:14.197 08:42:36 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:14.197 08:42:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.197 08:42:36 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:14.197 08:42:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:14.197 { 00:04:14.197 "name": "Malloc0", 00:04:14.197 "aliases": [ 00:04:14.197 "e9b193fd-5d0f-4b1e-bf9f-056fd5974e59" 00:04:14.197 ], 00:04:14.197 "product_name": "Malloc disk", 00:04:14.197 "block_size": 512, 00:04:14.197 "num_blocks": 16384, 00:04:14.197 "uuid": "e9b193fd-5d0f-4b1e-bf9f-056fd5974e59", 00:04:14.197 "assigned_rate_limits": { 00:04:14.197 "rw_ios_per_sec": 0, 00:04:14.197 "rw_mbytes_per_sec": 0, 00:04:14.197 "r_mbytes_per_sec": 0, 00:04:14.197 "w_mbytes_per_sec": 0 00:04:14.197 }, 00:04:14.197 "claimed": false, 00:04:14.197 "zoned": false, 00:04:14.197 "supported_io_types": { 00:04:14.197 "read": true, 00:04:14.197 "write": true, 00:04:14.197 "unmap": true, 00:04:14.197 "write_zeroes": true, 00:04:14.197 "flush": true, 00:04:14.197 "reset": true, 00:04:14.197 "compare": false, 00:04:14.197 "compare_and_write": false, 00:04:14.197 "abort": true, 00:04:14.197 "nvme_admin": false, 00:04:14.197 "nvme_io": false 00:04:14.197 }, 00:04:14.197 "memory_domains": [ 00:04:14.197 { 00:04:14.197 "dma_device_id": "system", 00:04:14.197 "dma_device_type": 1 00:04:14.197 }, 00:04:14.197 { 00:04:14.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.197 "dma_device_type": 2 00:04:14.197 } 00:04:14.197 ], 00:04:14.197 "driver_specific": {} 00:04:14.197 } 00:04:14.197 ]' 00:04:14.197 08:42:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:14.455 08:42:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:14.455 08:42:36 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:14.455 08:42:36 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:14.455 08:42:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.455 [2024-06-09 08:42:36.771091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:14.455 [2024-06-09 08:42:36.771120] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:14.455 [2024-06-09 08:42:36.771131] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11df6b0 00:04:14.455 [2024-06-09 08:42:36.771137] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:14.455 [2024-06-09 08:42:36.772148] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:14.455 [2024-06-09 08:42:36.772169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:14.455 Passthru0 00:04:14.455 08:42:36 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:14.455 08:42:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:04:14.455 08:42:36 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:14.455 08:42:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.455 08:42:36 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:14.455 08:42:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:14.455 { 00:04:14.455 "name": "Malloc0", 00:04:14.455 "aliases": [ 00:04:14.455 "e9b193fd-5d0f-4b1e-bf9f-056fd5974e59" 00:04:14.455 ], 00:04:14.455 "product_name": "Malloc disk", 00:04:14.455 "block_size": 512, 00:04:14.455 "num_blocks": 16384, 00:04:14.455 "uuid": "e9b193fd-5d0f-4b1e-bf9f-056fd5974e59", 00:04:14.455 "assigned_rate_limits": { 00:04:14.455 "rw_ios_per_sec": 0, 00:04:14.455 "rw_mbytes_per_sec": 0, 00:04:14.455 "r_mbytes_per_sec": 0, 00:04:14.455 "w_mbytes_per_sec": 0 00:04:14.455 }, 00:04:14.455 "claimed": true, 00:04:14.455 "claim_type": "exclusive_write", 00:04:14.455 "zoned": false, 00:04:14.455 "supported_io_types": { 00:04:14.455 "read": true, 00:04:14.455 "write": true, 00:04:14.455 "unmap": true, 00:04:14.455 "write_zeroes": true, 00:04:14.455 "flush": true, 00:04:14.455 "reset": true, 00:04:14.455 "compare": false, 00:04:14.455 "compare_and_write": false, 00:04:14.455 "abort": true, 00:04:14.455 "nvme_admin": false, 00:04:14.455 "nvme_io": false 00:04:14.455 }, 00:04:14.455 "memory_domains": [ 00:04:14.455 { 00:04:14.455 "dma_device_id": "system", 00:04:14.455 "dma_device_type": 1 00:04:14.455 }, 00:04:14.455 { 00:04:14.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.455 "dma_device_type": 2 00:04:14.455 } 00:04:14.455 ], 00:04:14.455 "driver_specific": {} 00:04:14.455 }, 00:04:14.455 { 00:04:14.455 "name": "Passthru0", 00:04:14.455 "aliases": [ 00:04:14.455 "4f1ef9fb-c456-58ac-b6b7-0dea6c06d486" 00:04:14.455 ], 00:04:14.455 "product_name": "passthru", 00:04:14.455 "block_size": 512, 00:04:14.455 "num_blocks": 16384, 00:04:14.455 "uuid": "4f1ef9fb-c456-58ac-b6b7-0dea6c06d486", 00:04:14.455 "assigned_rate_limits": { 00:04:14.456 "rw_ios_per_sec": 0, 00:04:14.456 "rw_mbytes_per_sec": 0, 00:04:14.456 "r_mbytes_per_sec": 0, 00:04:14.456 "w_mbytes_per_sec": 0 00:04:14.456 }, 00:04:14.456 "claimed": false, 00:04:14.456 "zoned": false, 00:04:14.456 "supported_io_types": { 00:04:14.456 "read": true, 00:04:14.456 "write": true, 00:04:14.456 "unmap": true, 00:04:14.456 "write_zeroes": true, 00:04:14.456 "flush": true, 00:04:14.456 "reset": true, 00:04:14.456 "compare": false, 00:04:14.456 "compare_and_write": false, 00:04:14.456 "abort": true, 00:04:14.456 "nvme_admin": false, 00:04:14.456 "nvme_io": false 00:04:14.456 }, 00:04:14.456 "memory_domains": [ 00:04:14.456 { 00:04:14.456 "dma_device_id": "system", 00:04:14.456 "dma_device_type": 1 00:04:14.456 }, 00:04:14.456 { 00:04:14.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.456 "dma_device_type": 2 00:04:14.456 } 00:04:14.456 ], 00:04:14.456 "driver_specific": { 00:04:14.456 "passthru": { 00:04:14.456 "name": "Passthru0", 00:04:14.456 "base_bdev_name": "Malloc0" 00:04:14.456 } 00:04:14.456 } 00:04:14.456 } 00:04:14.456 ]' 00:04:14.456 08:42:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:14.456 08:42:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:14.456 08:42:36 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:14.456 08:42:36 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:14.456 08:42:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.456 
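The rpc_integrity pass above exercises the malloc/passthru bdev lifecycle entirely over JSON-RPC. As a minimal sketch of the same sequence using SPDK's stock scripts/rpc.py client (the default /var/tmp/spdk.sock socket and the Malloc0/Passthru0 names are assumptions mirroring the log, not guaranteed return values):

# create an 8 MB malloc bdev with 512-byte blocks (16384 blocks, matching the JSON above)
./scripts/rpc.py bdev_malloc_create 8 512
# wrap it in a passthru bdev, then confirm both appear and Malloc0 shows as claimed
./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
./scripts/rpc.py bdev_get_bdevs
# tear down in reverse order
./scripts/rpc.py bdev_passthru_delete Passthru0
./scripts/rpc.py bdev_malloc_delete Malloc0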
08:42:36 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:14.456 08:42:36 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:14.456 08:42:36 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:14.456 08:42:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.456 08:42:36 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:14.456 08:42:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:14.456 08:42:36 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:14.456 08:42:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.456 08:42:36 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:14.456 08:42:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:14.456 08:42:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:14.456 08:42:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:14.456 00:04:14.456 real 0m0.264s 00:04:14.456 user 0m0.164s 00:04:14.456 sys 0m0.036s 00:04:14.456 08:42:36 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:14.456 08:42:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.456 ************************************ 00:04:14.456 END TEST rpc_integrity 00:04:14.456 ************************************ 00:04:14.456 08:42:36 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:14.456 08:42:36 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:14.456 08:42:36 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:14.456 08:42:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.456 ************************************ 00:04:14.456 START TEST rpc_plugins 00:04:14.456 ************************************ 00:04:14.456 08:42:36 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # rpc_plugins 00:04:14.456 08:42:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:14.456 08:42:36 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:14.456 08:42:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.456 08:42:36 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:14.456 08:42:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:14.456 08:42:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:14.456 08:42:36 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:14.456 08:42:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.456 08:42:37 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:14.456 08:42:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:14.456 { 00:04:14.456 "name": "Malloc1", 00:04:14.456 "aliases": [ 00:04:14.456 "208b7afc-e102-492d-a170-c1464af52230" 00:04:14.456 ], 00:04:14.456 "product_name": "Malloc disk", 00:04:14.456 "block_size": 4096, 00:04:14.456 "num_blocks": 256, 00:04:14.456 "uuid": "208b7afc-e102-492d-a170-c1464af52230", 00:04:14.456 "assigned_rate_limits": { 00:04:14.456 "rw_ios_per_sec": 0, 00:04:14.456 "rw_mbytes_per_sec": 0, 00:04:14.456 "r_mbytes_per_sec": 0, 00:04:14.456 "w_mbytes_per_sec": 0 00:04:14.456 }, 00:04:14.456 "claimed": false, 00:04:14.456 "zoned": false, 00:04:14.456 "supported_io_types": { 00:04:14.456 "read": true, 00:04:14.456 "write": true, 00:04:14.456 "unmap": true, 00:04:14.456 "write_zeroes": true, 00:04:14.456 
"flush": true, 00:04:14.456 "reset": true, 00:04:14.456 "compare": false, 00:04:14.456 "compare_and_write": false, 00:04:14.456 "abort": true, 00:04:14.456 "nvme_admin": false, 00:04:14.456 "nvme_io": false 00:04:14.456 }, 00:04:14.456 "memory_domains": [ 00:04:14.456 { 00:04:14.456 "dma_device_id": "system", 00:04:14.456 "dma_device_type": 1 00:04:14.456 }, 00:04:14.456 { 00:04:14.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.456 "dma_device_type": 2 00:04:14.456 } 00:04:14.456 ], 00:04:14.456 "driver_specific": {} 00:04:14.456 } 00:04:14.456 ]' 00:04:14.456 08:42:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:14.713 08:42:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:14.713 08:42:37 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:14.713 08:42:37 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:14.713 08:42:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.713 08:42:37 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:14.713 08:42:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:14.713 08:42:37 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:14.713 08:42:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.713 08:42:37 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:14.713 08:42:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:14.713 08:42:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:14.713 08:42:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:14.713 00:04:14.713 real 0m0.136s 00:04:14.713 user 0m0.084s 00:04:14.713 sys 0m0.015s 00:04:14.713 08:42:37 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:14.713 08:42:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:14.713 ************************************ 00:04:14.713 END TEST rpc_plugins 00:04:14.713 ************************************ 00:04:14.713 08:42:37 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:14.713 08:42:37 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:14.714 08:42:37 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:14.714 08:42:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.714 ************************************ 00:04:14.714 START TEST rpc_trace_cmd_test 00:04:14.714 ************************************ 00:04:14.714 08:42:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # rpc_trace_cmd_test 00:04:14.714 08:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:14.714 08:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:14.714 08:42:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:14.714 08:42:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:14.714 08:42:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:14.714 08:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:14.714 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1144959", 00:04:14.714 "tpoint_group_mask": "0x8", 00:04:14.714 "iscsi_conn": { 00:04:14.714 "mask": "0x2", 00:04:14.714 "tpoint_mask": "0x0" 00:04:14.714 }, 00:04:14.714 "scsi": { 00:04:14.714 "mask": "0x4", 00:04:14.714 "tpoint_mask": "0x0" 00:04:14.714 }, 00:04:14.714 "bdev": { 00:04:14.714 "mask": "0x8", 00:04:14.714 "tpoint_mask": 
"0xffffffffffffffff" 00:04:14.714 }, 00:04:14.714 "nvmf_rdma": { 00:04:14.714 "mask": "0x10", 00:04:14.714 "tpoint_mask": "0x0" 00:04:14.714 }, 00:04:14.714 "nvmf_tcp": { 00:04:14.714 "mask": "0x20", 00:04:14.714 "tpoint_mask": "0x0" 00:04:14.714 }, 00:04:14.714 "ftl": { 00:04:14.714 "mask": "0x40", 00:04:14.714 "tpoint_mask": "0x0" 00:04:14.714 }, 00:04:14.714 "blobfs": { 00:04:14.714 "mask": "0x80", 00:04:14.714 "tpoint_mask": "0x0" 00:04:14.714 }, 00:04:14.714 "dsa": { 00:04:14.714 "mask": "0x200", 00:04:14.714 "tpoint_mask": "0x0" 00:04:14.714 }, 00:04:14.714 "thread": { 00:04:14.714 "mask": "0x400", 00:04:14.714 "tpoint_mask": "0x0" 00:04:14.714 }, 00:04:14.714 "nvme_pcie": { 00:04:14.714 "mask": "0x800", 00:04:14.714 "tpoint_mask": "0x0" 00:04:14.714 }, 00:04:14.714 "iaa": { 00:04:14.714 "mask": "0x1000", 00:04:14.714 "tpoint_mask": "0x0" 00:04:14.714 }, 00:04:14.714 "nvme_tcp": { 00:04:14.714 "mask": "0x2000", 00:04:14.714 "tpoint_mask": "0x0" 00:04:14.714 }, 00:04:14.714 "bdev_nvme": { 00:04:14.714 "mask": "0x4000", 00:04:14.714 "tpoint_mask": "0x0" 00:04:14.714 }, 00:04:14.714 "sock": { 00:04:14.714 "mask": "0x8000", 00:04:14.714 "tpoint_mask": "0x0" 00:04:14.714 } 00:04:14.714 }' 00:04:14.714 08:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:14.714 08:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:14.714 08:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:14.971 08:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:14.971 08:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:14.971 08:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:14.971 08:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:14.971 08:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:14.971 08:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:14.971 08:42:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:14.971 00:04:14.971 real 0m0.224s 00:04:14.971 user 0m0.188s 00:04:14.971 sys 0m0.029s 00:04:14.971 08:42:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:14.971 08:42:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:14.971 ************************************ 00:04:14.971 END TEST rpc_trace_cmd_test 00:04:14.971 ************************************ 00:04:14.971 08:42:37 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:14.971 08:42:37 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:14.971 08:42:37 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:14.971 08:42:37 rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:14.971 08:42:37 rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:14.971 08:42:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.971 ************************************ 00:04:14.971 START TEST rpc_daemon_integrity 00:04:14.971 ************************************ 00:04:14.971 08:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # rpc_integrity 00:04:14.971 08:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:14.971 08:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:14.971 08:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.971 08:42:37 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:14.971 08:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:14.971 08:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:14.971 08:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:14.971 08:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:14.971 08:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:14.971 08:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.971 08:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:15.229 08:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:15.229 08:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:15.229 08:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:15.229 08:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.229 08:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:15.229 08:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:15.229 { 00:04:15.229 "name": "Malloc2", 00:04:15.229 "aliases": [ 00:04:15.229 "baadbaee-175b-4c33-96a5-63bc9330ba40" 00:04:15.229 ], 00:04:15.229 "product_name": "Malloc disk", 00:04:15.229 "block_size": 512, 00:04:15.229 "num_blocks": 16384, 00:04:15.229 "uuid": "baadbaee-175b-4c33-96a5-63bc9330ba40", 00:04:15.229 "assigned_rate_limits": { 00:04:15.229 "rw_ios_per_sec": 0, 00:04:15.229 "rw_mbytes_per_sec": 0, 00:04:15.229 "r_mbytes_per_sec": 0, 00:04:15.229 "w_mbytes_per_sec": 0 00:04:15.229 }, 00:04:15.229 "claimed": false, 00:04:15.229 "zoned": false, 00:04:15.229 "supported_io_types": { 00:04:15.229 "read": true, 00:04:15.229 "write": true, 00:04:15.229 "unmap": true, 00:04:15.229 "write_zeroes": true, 00:04:15.229 "flush": true, 00:04:15.229 "reset": true, 00:04:15.229 "compare": false, 00:04:15.229 "compare_and_write": false, 00:04:15.229 "abort": true, 00:04:15.229 "nvme_admin": false, 00:04:15.229 "nvme_io": false 00:04:15.229 }, 00:04:15.229 "memory_domains": [ 00:04:15.229 { 00:04:15.229 "dma_device_id": "system", 00:04:15.229 "dma_device_type": 1 00:04:15.229 }, 00:04:15.229 { 00:04:15.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.229 "dma_device_type": 2 00:04:15.229 } 00:04:15.229 ], 00:04:15.229 "driver_specific": {} 00:04:15.229 } 00:04:15.229 ]' 00:04:15.229 08:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:15.229 08:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:15.229 08:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:15.229 08:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:15.229 08:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.229 [2024-06-09 08:42:37.593356] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:15.229 [2024-06-09 08:42:37.593384] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:15.229 [2024-06-09 08:42:37.593396] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11deea0 00:04:15.229 [2024-06-09 08:42:37.593402] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:15.229 [2024-06-09 08:42:37.594324] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:15.229 [2024-06-09 08:42:37.594343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:15.229 Passthru0 00:04:15.229 08:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:15.230 08:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:15.230 08:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:15.230 08:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.230 08:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:15.230 08:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:15.230 { 00:04:15.230 "name": "Malloc2", 00:04:15.230 "aliases": [ 00:04:15.230 "baadbaee-175b-4c33-96a5-63bc9330ba40" 00:04:15.230 ], 00:04:15.230 "product_name": "Malloc disk", 00:04:15.230 "block_size": 512, 00:04:15.230 "num_blocks": 16384, 00:04:15.230 "uuid": "baadbaee-175b-4c33-96a5-63bc9330ba40", 00:04:15.230 "assigned_rate_limits": { 00:04:15.230 "rw_ios_per_sec": 0, 00:04:15.230 "rw_mbytes_per_sec": 0, 00:04:15.230 "r_mbytes_per_sec": 0, 00:04:15.230 "w_mbytes_per_sec": 0 00:04:15.230 }, 00:04:15.230 "claimed": true, 00:04:15.230 "claim_type": "exclusive_write", 00:04:15.230 "zoned": false, 00:04:15.230 "supported_io_types": { 00:04:15.230 "read": true, 00:04:15.230 "write": true, 00:04:15.230 "unmap": true, 00:04:15.230 "write_zeroes": true, 00:04:15.230 "flush": true, 00:04:15.230 "reset": true, 00:04:15.230 "compare": false, 00:04:15.230 "compare_and_write": false, 00:04:15.230 "abort": true, 00:04:15.230 "nvme_admin": false, 00:04:15.230 "nvme_io": false 00:04:15.230 }, 00:04:15.230 "memory_domains": [ 00:04:15.230 { 00:04:15.230 "dma_device_id": "system", 00:04:15.230 "dma_device_type": 1 00:04:15.230 }, 00:04:15.230 { 00:04:15.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.230 "dma_device_type": 2 00:04:15.230 } 00:04:15.230 ], 00:04:15.230 "driver_specific": {} 00:04:15.230 }, 00:04:15.230 { 00:04:15.230 "name": "Passthru0", 00:04:15.230 "aliases": [ 00:04:15.230 "1d4dbe7a-8f1c-5ba4-93fd-147a03b0f84a" 00:04:15.230 ], 00:04:15.230 "product_name": "passthru", 00:04:15.230 "block_size": 512, 00:04:15.230 "num_blocks": 16384, 00:04:15.230 "uuid": "1d4dbe7a-8f1c-5ba4-93fd-147a03b0f84a", 00:04:15.230 "assigned_rate_limits": { 00:04:15.230 "rw_ios_per_sec": 0, 00:04:15.230 "rw_mbytes_per_sec": 0, 00:04:15.230 "r_mbytes_per_sec": 0, 00:04:15.230 "w_mbytes_per_sec": 0 00:04:15.230 }, 00:04:15.230 "claimed": false, 00:04:15.230 "zoned": false, 00:04:15.230 "supported_io_types": { 00:04:15.230 "read": true, 00:04:15.230 "write": true, 00:04:15.230 "unmap": true, 00:04:15.230 "write_zeroes": true, 00:04:15.230 "flush": true, 00:04:15.230 "reset": true, 00:04:15.230 "compare": false, 00:04:15.230 "compare_and_write": false, 00:04:15.230 "abort": true, 00:04:15.230 "nvme_admin": false, 00:04:15.230 "nvme_io": false 00:04:15.230 }, 00:04:15.230 "memory_domains": [ 00:04:15.230 { 00:04:15.230 "dma_device_id": "system", 00:04:15.230 "dma_device_type": 1 00:04:15.230 }, 00:04:15.230 { 00:04:15.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.230 "dma_device_type": 2 00:04:15.230 } 00:04:15.230 ], 00:04:15.230 "driver_specific": { 00:04:15.230 "passthru": { 00:04:15.230 "name": "Passthru0", 00:04:15.230 "base_bdev_name": "Malloc2" 00:04:15.230 } 00:04:15.230 } 00:04:15.230 } 00:04:15.230 ]' 00:04:15.230 08:42:37 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:15.230 08:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:15.230 08:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:15.230 08:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:15.230 08:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.230 08:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:15.230 08:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:15.230 08:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:15.230 08:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.230 08:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:15.230 08:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:15.230 08:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:15.230 08:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.230 08:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:15.230 08:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:15.230 08:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:15.230 08:42:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:15.230 00:04:15.230 real 0m0.259s 00:04:15.230 user 0m0.168s 00:04:15.230 sys 0m0.029s 00:04:15.230 08:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:15.230 08:42:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.230 ************************************ 00:04:15.230 END TEST rpc_daemon_integrity 00:04:15.230 ************************************ 00:04:15.230 08:42:37 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:15.230 08:42:37 rpc -- rpc/rpc.sh@84 -- # killprocess 1144959 00:04:15.230 08:42:37 rpc -- common/autotest_common.sh@949 -- # '[' -z 1144959 ']' 00:04:15.230 08:42:37 rpc -- common/autotest_common.sh@953 -- # kill -0 1144959 00:04:15.230 08:42:37 rpc -- common/autotest_common.sh@954 -- # uname 00:04:15.230 08:42:37 rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:15.230 08:42:37 rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1144959 00:04:15.487 08:42:37 rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:15.487 08:42:37 rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:15.487 08:42:37 rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1144959' 00:04:15.487 killing process with pid 1144959 00:04:15.487 08:42:37 rpc -- common/autotest_common.sh@968 -- # kill 1144959 00:04:15.487 08:42:37 rpc -- common/autotest_common.sh@973 -- # wait 1144959 00:04:15.744 00:04:15.744 real 0m2.413s 00:04:15.744 user 0m3.115s 00:04:15.744 sys 0m0.646s 00:04:15.744 08:42:38 rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:15.744 08:42:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.744 ************************************ 00:04:15.744 END TEST rpc 00:04:15.744 ************************************ 00:04:15.744 08:42:38 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:15.744 08:42:38 
-- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:15.744 08:42:38 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:15.744 08:42:38 -- common/autotest_common.sh@10 -- # set +x 00:04:15.744 ************************************ 00:04:15.744 START TEST skip_rpc 00:04:15.744 ************************************ 00:04:15.744 08:42:38 skip_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:15.744 * Looking for test storage... 00:04:15.744 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc 00:04:15.744 08:42:38 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/config.json 00:04:15.744 08:42:38 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/log.txt 00:04:15.744 08:42:38 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:15.744 08:42:38 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:15.744 08:42:38 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:15.744 08:42:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.744 ************************************ 00:04:15.744 START TEST skip_rpc 00:04:15.744 ************************************ 00:04:15.744 08:42:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # test_skip_rpc 00:04:15.744 08:42:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1145576 00:04:15.744 08:42:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:15.744 08:42:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:15.744 08:42:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:16.002 [2024-06-09 08:42:38.346910] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:04:16.002 [2024-06-09 08:42:38.346949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1145576 ] 00:04:16.002 EAL: No free 2048 kB hugepages reported on node 1 00:04:16.002 [2024-06-09 08:42:38.399664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.002 [2024-06-09 08:42:38.470080] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.260 08:42:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:21.260 08:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:04:21.260 08:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:21.260 08:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:04:21.260 08:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:21.260 08:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:04:21.260 08:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:21.260 08:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:04:21.260 08:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:21.260 08:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.260 08:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:04:21.260 08:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:04:21.260 08:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:21.260 08:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:04:21.260 08:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:21.260 08:42:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:21.260 08:42:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1145576 00:04:21.260 08:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@949 -- # '[' -z 1145576 ']' 00:04:21.260 08:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # kill -0 1145576 00:04:21.260 08:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # uname 00:04:21.260 08:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:21.260 08:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1145576 00:04:21.260 08:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:21.260 08:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:21.260 08:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1145576' 00:04:21.260 killing process with pid 1145576 00:04:21.260 08:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # kill 1145576 00:04:21.260 08:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # wait 1145576 00:04:21.260 00:04:21.260 real 0m5.361s 00:04:21.260 user 0m5.145s 00:04:21.260 sys 0m0.240s 00:04:21.260 08:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:21.260 08:42:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.260 ************************************ 00:04:21.260 END TEST skip_rpc 
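The skip_rpc case that just finished asserts the negative path: a target started with --no-rpc-server never opens /var/tmp/spdk.sock, so spdk_get_version must fail, and the NOT wrapper counts that failure as a pass. A minimal equivalent probe with scripts/rpc.py (socket and script paths assumed):

# while spdk_tgt runs with --no-rpc-server, this call is expected to fail
./scripts/rpc.py spdk_get_version \
  && echo 'unexpected: RPC server answered' \
  || echo 'expected: no RPC server listening'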
00:04:21.260 ************************************ 00:04:21.260 08:42:43 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:21.260 08:42:43 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:21.260 08:42:43 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:21.260 08:42:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.260 ************************************ 00:04:21.260 START TEST skip_rpc_with_json 00:04:21.260 ************************************ 00:04:21.260 08:42:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_json 00:04:21.260 08:42:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:21.260 08:42:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1146516 00:04:21.260 08:42:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:21.260 08:42:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:21.260 08:42:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1146516 00:04:21.260 08:42:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # '[' -z 1146516 ']' 00:04:21.260 08:42:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.260 08:42:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:21.260 08:42:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:21.260 08:42:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:21.260 08:42:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.260 [2024-06-09 08:42:43.767630] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:04:21.260 [2024-06-09 08:42:43.767666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1146516 ] 00:04:21.260 EAL: No free 2048 kB hugepages reported on node 1 00:04:21.518 [2024-06-09 08:42:43.821223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.518 [2024-06-09 08:42:43.890464] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.082 08:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:22.082 08:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@863 -- # return 0 00:04:22.082 08:42:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:22.082 08:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:22.082 08:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:22.082 [2024-06-09 08:42:44.560049] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:22.082 request: 00:04:22.082 { 00:04:22.082 "trtype": "tcp", 00:04:22.082 "method": "nvmf_get_transports", 00:04:22.082 "req_id": 1 00:04:22.082 } 00:04:22.082 Got JSON-RPC error response 00:04:22.082 response: 00:04:22.082 { 00:04:22.082 "code": -19, 00:04:22.082 "message": "No such device" 00:04:22.082 } 00:04:22.082 08:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:04:22.082 08:42:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:22.082 08:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:22.082 08:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:22.082 [2024-06-09 08:42:44.572137] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:22.082 08:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:22.082 08:42:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:22.082 08:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:22.082 08:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:22.340 08:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:22.340 08:42:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/config.json 00:04:22.340 { 00:04:22.340 "subsystems": [ 00:04:22.340 { 00:04:22.340 "subsystem": "keyring", 00:04:22.340 "config": [] 00:04:22.340 }, 00:04:22.340 { 00:04:22.340 "subsystem": "iobuf", 00:04:22.340 "config": [ 00:04:22.340 { 00:04:22.340 "method": "iobuf_set_options", 00:04:22.340 "params": { 00:04:22.340 "small_pool_count": 8192, 00:04:22.340 "large_pool_count": 1024, 00:04:22.340 "small_bufsize": 8192, 00:04:22.340 "large_bufsize": 135168 00:04:22.340 } 00:04:22.340 } 00:04:22.340 ] 00:04:22.340 }, 00:04:22.340 { 00:04:22.340 "subsystem": "sock", 00:04:22.340 "config": [ 00:04:22.340 { 00:04:22.340 "method": "sock_set_default_impl", 00:04:22.340 "params": { 00:04:22.340 "impl_name": "posix" 00:04:22.340 } 00:04:22.340 }, 00:04:22.340 { 00:04:22.340 "method": "sock_impl_set_options", 00:04:22.340 "params": { 00:04:22.340 "impl_name": "ssl", 00:04:22.341 "recv_buf_size": 
4096, 00:04:22.341 "send_buf_size": 4096, 00:04:22.341 "enable_recv_pipe": true, 00:04:22.341 "enable_quickack": false, 00:04:22.341 "enable_placement_id": 0, 00:04:22.341 "enable_zerocopy_send_server": true, 00:04:22.341 "enable_zerocopy_send_client": false, 00:04:22.341 "zerocopy_threshold": 0, 00:04:22.341 "tls_version": 0, 00:04:22.341 "enable_ktls": false 00:04:22.341 } 00:04:22.341 }, 00:04:22.341 { 00:04:22.341 "method": "sock_impl_set_options", 00:04:22.341 "params": { 00:04:22.341 "impl_name": "posix", 00:04:22.341 "recv_buf_size": 2097152, 00:04:22.341 "send_buf_size": 2097152, 00:04:22.341 "enable_recv_pipe": true, 00:04:22.341 "enable_quickack": false, 00:04:22.341 "enable_placement_id": 0, 00:04:22.341 "enable_zerocopy_send_server": true, 00:04:22.341 "enable_zerocopy_send_client": false, 00:04:22.341 "zerocopy_threshold": 0, 00:04:22.341 "tls_version": 0, 00:04:22.341 "enable_ktls": false 00:04:22.341 } 00:04:22.341 } 00:04:22.341 ] 00:04:22.341 }, 00:04:22.341 { 00:04:22.341 "subsystem": "vmd", 00:04:22.341 "config": [] 00:04:22.341 }, 00:04:22.341 { 00:04:22.341 "subsystem": "accel", 00:04:22.341 "config": [ 00:04:22.341 { 00:04:22.341 "method": "accel_set_options", 00:04:22.341 "params": { 00:04:22.341 "small_cache_size": 128, 00:04:22.341 "large_cache_size": 16, 00:04:22.341 "task_count": 2048, 00:04:22.341 "sequence_count": 2048, 00:04:22.341 "buf_count": 2048 00:04:22.341 } 00:04:22.341 } 00:04:22.341 ] 00:04:22.341 }, 00:04:22.341 { 00:04:22.341 "subsystem": "bdev", 00:04:22.341 "config": [ 00:04:22.341 { 00:04:22.341 "method": "bdev_set_options", 00:04:22.341 "params": { 00:04:22.341 "bdev_io_pool_size": 65535, 00:04:22.341 "bdev_io_cache_size": 256, 00:04:22.341 "bdev_auto_examine": true, 00:04:22.341 "iobuf_small_cache_size": 128, 00:04:22.341 "iobuf_large_cache_size": 16 00:04:22.341 } 00:04:22.341 }, 00:04:22.341 { 00:04:22.341 "method": "bdev_raid_set_options", 00:04:22.341 "params": { 00:04:22.341 "process_window_size_kb": 1024 00:04:22.341 } 00:04:22.341 }, 00:04:22.341 { 00:04:22.341 "method": "bdev_iscsi_set_options", 00:04:22.341 "params": { 00:04:22.341 "timeout_sec": 30 00:04:22.341 } 00:04:22.341 }, 00:04:22.341 { 00:04:22.341 "method": "bdev_nvme_set_options", 00:04:22.341 "params": { 00:04:22.341 "action_on_timeout": "none", 00:04:22.341 "timeout_us": 0, 00:04:22.341 "timeout_admin_us": 0, 00:04:22.341 "keep_alive_timeout_ms": 10000, 00:04:22.341 "arbitration_burst": 0, 00:04:22.341 "low_priority_weight": 0, 00:04:22.341 "medium_priority_weight": 0, 00:04:22.341 "high_priority_weight": 0, 00:04:22.341 "nvme_adminq_poll_period_us": 10000, 00:04:22.341 "nvme_ioq_poll_period_us": 0, 00:04:22.341 "io_queue_requests": 0, 00:04:22.341 "delay_cmd_submit": true, 00:04:22.341 "transport_retry_count": 4, 00:04:22.341 "bdev_retry_count": 3, 00:04:22.341 "transport_ack_timeout": 0, 00:04:22.341 "ctrlr_loss_timeout_sec": 0, 00:04:22.341 "reconnect_delay_sec": 0, 00:04:22.341 "fast_io_fail_timeout_sec": 0, 00:04:22.341 "disable_auto_failback": false, 00:04:22.341 "generate_uuids": false, 00:04:22.341 "transport_tos": 0, 00:04:22.341 "nvme_error_stat": false, 00:04:22.341 "rdma_srq_size": 0, 00:04:22.341 "io_path_stat": false, 00:04:22.341 "allow_accel_sequence": false, 00:04:22.341 "rdma_max_cq_size": 0, 00:04:22.341 "rdma_cm_event_timeout_ms": 0, 00:04:22.341 "dhchap_digests": [ 00:04:22.341 "sha256", 00:04:22.341 "sha384", 00:04:22.341 "sha512" 00:04:22.341 ], 00:04:22.341 "dhchap_dhgroups": [ 00:04:22.341 "null", 00:04:22.341 "ffdhe2048", 00:04:22.341 "ffdhe3072", 
00:04:22.341 "ffdhe4096", 00:04:22.341 "ffdhe6144", 00:04:22.341 "ffdhe8192" 00:04:22.341 ] 00:04:22.341 } 00:04:22.341 }, 00:04:22.341 { 00:04:22.341 "method": "bdev_nvme_set_hotplug", 00:04:22.341 "params": { 00:04:22.341 "period_us": 100000, 00:04:22.341 "enable": false 00:04:22.341 } 00:04:22.341 }, 00:04:22.341 { 00:04:22.341 "method": "bdev_wait_for_examine" 00:04:22.341 } 00:04:22.341 ] 00:04:22.341 }, 00:04:22.341 { 00:04:22.341 "subsystem": "scsi", 00:04:22.341 "config": null 00:04:22.341 }, 00:04:22.341 { 00:04:22.341 "subsystem": "scheduler", 00:04:22.341 "config": [ 00:04:22.341 { 00:04:22.341 "method": "framework_set_scheduler", 00:04:22.341 "params": { 00:04:22.341 "name": "static" 00:04:22.341 } 00:04:22.341 } 00:04:22.341 ] 00:04:22.341 }, 00:04:22.341 { 00:04:22.341 "subsystem": "vhost_scsi", 00:04:22.341 "config": [] 00:04:22.341 }, 00:04:22.341 { 00:04:22.341 "subsystem": "vhost_blk", 00:04:22.341 "config": [] 00:04:22.341 }, 00:04:22.341 { 00:04:22.341 "subsystem": "ublk", 00:04:22.341 "config": [] 00:04:22.341 }, 00:04:22.341 { 00:04:22.341 "subsystem": "nbd", 00:04:22.341 "config": [] 00:04:22.341 }, 00:04:22.341 { 00:04:22.341 "subsystem": "nvmf", 00:04:22.341 "config": [ 00:04:22.341 { 00:04:22.341 "method": "nvmf_set_config", 00:04:22.341 "params": { 00:04:22.341 "discovery_filter": "match_any", 00:04:22.341 "admin_cmd_passthru": { 00:04:22.341 "identify_ctrlr": false 00:04:22.341 } 00:04:22.341 } 00:04:22.341 }, 00:04:22.341 { 00:04:22.341 "method": "nvmf_set_max_subsystems", 00:04:22.341 "params": { 00:04:22.341 "max_subsystems": 1024 00:04:22.341 } 00:04:22.341 }, 00:04:22.341 { 00:04:22.341 "method": "nvmf_set_crdt", 00:04:22.341 "params": { 00:04:22.341 "crdt1": 0, 00:04:22.341 "crdt2": 0, 00:04:22.341 "crdt3": 0 00:04:22.341 } 00:04:22.341 }, 00:04:22.341 { 00:04:22.341 "method": "nvmf_create_transport", 00:04:22.341 "params": { 00:04:22.341 "trtype": "TCP", 00:04:22.341 "max_queue_depth": 128, 00:04:22.341 "max_io_qpairs_per_ctrlr": 127, 00:04:22.341 "in_capsule_data_size": 4096, 00:04:22.341 "max_io_size": 131072, 00:04:22.341 "io_unit_size": 131072, 00:04:22.341 "max_aq_depth": 128, 00:04:22.341 "num_shared_buffers": 511, 00:04:22.341 "buf_cache_size": 4294967295, 00:04:22.341 "dif_insert_or_strip": false, 00:04:22.341 "zcopy": false, 00:04:22.341 "c2h_success": true, 00:04:22.341 "sock_priority": 0, 00:04:22.341 "abort_timeout_sec": 1, 00:04:22.341 "ack_timeout": 0, 00:04:22.341 "data_wr_pool_size": 0 00:04:22.341 } 00:04:22.341 } 00:04:22.341 ] 00:04:22.341 }, 00:04:22.341 { 00:04:22.341 "subsystem": "iscsi", 00:04:22.341 "config": [ 00:04:22.341 { 00:04:22.341 "method": "iscsi_set_options", 00:04:22.341 "params": { 00:04:22.341 "node_base": "iqn.2016-06.io.spdk", 00:04:22.341 "max_sessions": 128, 00:04:22.341 "max_connections_per_session": 2, 00:04:22.341 "max_queue_depth": 64, 00:04:22.341 "default_time2wait": 2, 00:04:22.341 "default_time2retain": 20, 00:04:22.341 "first_burst_length": 8192, 00:04:22.341 "immediate_data": true, 00:04:22.341 "allow_duplicated_isid": false, 00:04:22.341 "error_recovery_level": 0, 00:04:22.341 "nop_timeout": 60, 00:04:22.341 "nop_in_interval": 30, 00:04:22.341 "disable_chap": false, 00:04:22.341 "require_chap": false, 00:04:22.341 "mutual_chap": false, 00:04:22.341 "chap_group": 0, 00:04:22.341 "max_large_datain_per_connection": 64, 00:04:22.341 "max_r2t_per_connection": 4, 00:04:22.341 "pdu_pool_size": 36864, 00:04:22.341 "immediate_data_pool_size": 16384, 00:04:22.341 "data_out_pool_size": 2048 00:04:22.341 } 
00:04:22.341 } 00:04:22.341 ] 00:04:22.341 } 00:04:22.341 ] 00:04:22.341 } 00:04:22.341 08:42:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:22.341 08:42:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1146516 00:04:22.341 08:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 1146516 ']' 00:04:22.341 08:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 1146516 00:04:22.341 08:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:04:22.341 08:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:22.341 08:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1146516 00:04:22.341 08:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:22.341 08:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:22.341 08:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1146516' 00:04:22.341 killing process with pid 1146516 00:04:22.341 08:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 1146516 00:04:22.341 08:42:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 1146516 00:04:22.599 08:42:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1146751 00:04:22.599 08:42:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/config.json 00:04:22.599 08:42:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:27.856 08:42:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1146751 00:04:27.856 08:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@949 -- # '[' -z 1146751 ']' 00:04:27.856 08:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # kill -0 1146751 00:04:27.856 08:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # uname 00:04:27.856 08:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:27.856 08:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1146751 00:04:27.856 08:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:27.856 08:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:27.856 08:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1146751' 00:04:27.856 killing process with pid 1146751 00:04:27.856 08:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # kill 1146751 00:04:27.856 08:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # wait 1146751 00:04:27.856 08:42:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/log.txt 00:04:27.856 08:42:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/log.txt 00:04:28.114 00:04:28.114 real 0m6.690s 00:04:28.114 user 0m6.532s 00:04:28.114 sys 0m0.548s 00:04:28.114 08:42:50 skip_rpc.skip_rpc_with_json -- 
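The skip_rpc_with_json round-trip shown here is the standard runtime-config workflow: configure a live target over JSON-RPC, snapshot it with save_config, then boot a fresh target straight into that state via --json. A minimal sketch mirroring the commands in the log (binary and socket paths are assumptions):

# configure something observable, then snapshot the live configuration
./scripts/rpc.py nvmf_create_transport -t tcp
./scripts/rpc.py save_config > config.json
# a fresh target started with --json comes up pre-configured; the test
# verifies this by grepping its log for 'TCP Transport Init'
./build/bin/spdk_tgt --json config.json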
common/autotest_common.sh@1125 -- # xtrace_disable 00:04:28.114 08:42:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:28.114 ************************************ 00:04:28.114 END TEST skip_rpc_with_json 00:04:28.114 ************************************ 00:04:28.114 08:42:50 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:28.114 08:42:50 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:28.114 08:42:50 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:28.114 08:42:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.114 ************************************ 00:04:28.114 START TEST skip_rpc_with_delay 00:04:28.114 ************************************ 00:04:28.114 08:42:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # test_skip_rpc_with_delay 00:04:28.114 08:42:50 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:28.114 08:42:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:04:28.114 08:42:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:28.114 08:42:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:28.114 08:42:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:28.114 08:42:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:28.114 08:42:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:28.114 08:42:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:28.114 08:42:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:28.114 08:42:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:28.114 08:42:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:28.114 08:42:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:28.114 [2024-06-09 08:42:50.533942] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:28.114 [2024-06-09 08:42:50.533997] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:28.114 08:42:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:04:28.114 08:42:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:28.114 08:42:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:04:28.114 08:42:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:28.114 00:04:28.114 real 0m0.064s 00:04:28.114 user 0m0.044s 00:04:28.114 sys 0m0.019s 00:04:28.114 08:42:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:28.114 08:42:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:28.114 ************************************ 00:04:28.114 END TEST skip_rpc_with_delay 00:04:28.114 ************************************ 00:04:28.114 08:42:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:28.114 08:42:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:28.114 08:42:50 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:28.114 08:42:50 skip_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:28.114 08:42:50 skip_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:28.114 08:42:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.114 ************************************ 00:04:28.114 START TEST exit_on_failed_rpc_init 00:04:28.114 ************************************ 00:04:28.114 08:42:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # test_exit_on_failed_rpc_init 00:04:28.114 08:42:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1147710 00:04:28.114 08:42:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1147710 00:04:28.114 08:42:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:28.114 08:42:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@830 -- # '[' -z 1147710 ']' 00:04:28.114 08:42:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.114 08:42:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:28.114 08:42:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.114 08:42:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:28.114 08:42:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:28.114 [2024-06-09 08:42:50.657474] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:04:28.114 [2024-06-09 08:42:50.657514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1147710 ] 00:04:28.372 EAL: No free 2048 kB hugepages reported on node 1 00:04:28.372 [2024-06-09 08:42:50.711739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.372 [2024-06-09 08:42:50.789716] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.937 08:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:28.937 08:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@863 -- # return 0 00:04:28.937 08:42:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:28.937 08:42:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:28.937 08:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:04:28.937 08:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:28.937 08:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:28.937 08:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:28.937 08:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:28.937 08:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:28.937 08:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:28.937 08:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:04:28.937 08:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:28.937 08:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:28.937 08:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:28.937 [2024-06-09 08:42:51.488864] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:04:28.937 [2024-06-09 08:42:51.488910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1147939 ] 00:04:29.195 EAL: No free 2048 kB hugepages reported on node 1 00:04:29.195 [2024-06-09 08:42:51.541777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.195 [2024-06-09 08:42:51.611974] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.195 [2024-06-09 08:42:51.612038] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:29.195 [2024-06-09 08:42:51.612046] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:29.195 [2024-06-09 08:42:51.612052] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:29.195 08:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:04:29.195 08:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:04:29.195 08:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:04:29.195 08:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:04:29.195 08:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:04:29.195 08:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:04:29.195 08:42:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:29.195 08:42:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1147710 00:04:29.195 08:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@949 -- # '[' -z 1147710 ']' 00:04:29.195 08:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # kill -0 1147710 00:04:29.195 08:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # uname 00:04:29.195 08:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:29.195 08:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1147710 00:04:29.195 08:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:29.195 08:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:29.195 08:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1147710' 00:04:29.195 killing process with pid 1147710 00:04:29.195 08:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # kill 1147710 00:04:29.195 08:42:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # wait 1147710 00:04:29.760 00:04:29.760 real 0m1.416s 00:04:29.760 user 0m1.591s 00:04:29.760 sys 0m0.403s 00:04:29.760 08:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:29.760 08:42:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:29.760 ************************************ 00:04:29.760 END TEST exit_on_failed_rpc_init 00:04:29.760 ************************************ 00:04:29.760 08:42:52 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc/config.json 00:04:29.760 00:04:29.760 real 0m13.891s 00:04:29.760 user 0m13.450s 00:04:29.760 sys 0m1.459s 00:04:29.760 08:42:52 skip_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:29.760 08:42:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.760 ************************************ 00:04:29.760 END TEST skip_rpc 00:04:29.760 ************************************ 00:04:29.760 08:42:52 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:29.760 08:42:52 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:29.760 08:42:52 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:29.760 08:42:52 -- 
common/autotest_common.sh@10 -- # set +x 00:04:29.760 ************************************ 00:04:29.760 START TEST rpc_client 00:04:29.760 ************************************ 00:04:29.760 08:42:52 rpc_client -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:29.760 * Looking for test storage... 00:04:29.760 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_client 00:04:29.760 08:42:52 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:29.760 OK 00:04:29.760 08:42:52 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:29.760 00:04:29.760 real 0m0.108s 00:04:29.760 user 0m0.049s 00:04:29.760 sys 0m0.067s 00:04:29.760 08:42:52 rpc_client -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:29.760 08:42:52 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:29.760 ************************************ 00:04:29.760 END TEST rpc_client 00:04:29.760 ************************************ 00:04:29.760 08:42:52 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_config.sh 00:04:29.760 08:42:52 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:29.760 08:42:52 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:29.760 08:42:52 -- common/autotest_common.sh@10 -- # set +x 00:04:29.760 ************************************ 00:04:29.760 START TEST json_config 00:04:29.760 ************************************ 00:04:29.760 08:42:52 json_config -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_config.sh 00:04:30.019 08:42:52 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:04:30.019 08:42:52 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:30.019 08:42:52 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:30.019 08:42:52 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:30.019 08:42:52 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:30.019 08:42:52 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:30.019 08:42:52 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:30.019 08:42:52 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:30.019 08:42:52 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:30.019 08:42:52 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:30.019 08:42:52 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:30.019 08:42:52 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:30.019 08:42:52 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:04:30.019 08:42:52 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:04:30.019 08:42:52 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:30.019 08:42:52 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:30.019 08:42:52 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:30.019 08:42:52 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:30.019 08:42:52 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:04:30.019 08:42:52 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:30.019 08:42:52 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:30.019 08:42:52 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:30.019 08:42:52 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.019 08:42:52 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.019 08:42:52 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.019 08:42:52 json_config -- paths/export.sh@5 -- # export PATH 00:04:30.019 08:42:52 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.019 08:42:52 json_config -- nvmf/common.sh@47 -- # : 0 00:04:30.019 08:42:52 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:30.019 08:42:52 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:30.019 08:42:52 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:30.019 08:42:52 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:30.019 08:42:52 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:30.019 08:42:52 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:30.019 08:42:52 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:30.019 08:42:52 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:30.019 08:42:52 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/common.sh 00:04:30.019 08:42:52 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:30.019 08:42:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:30.019 08:42:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:30.019 08:42:52 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:30.019 08:42:52 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:30.019 08:42:52 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:30.019 08:42:52 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:30.019 08:42:52 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:30.019 08:42:52 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:30.019 08:42:52 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:30.019 08:42:52 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_initiator_config.json') 00:04:30.019 08:42:52 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:30.019 08:42:52 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:30.019 08:42:52 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:30.019 08:42:52 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:30.019 INFO: JSON configuration test init 00:04:30.019 08:42:52 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:30.019 08:42:52 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:30.019 08:42:52 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:30.019 08:42:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.019 08:42:52 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:30.019 08:42:52 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:30.019 08:42:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.019 08:42:52 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:30.019 08:42:52 json_config -- json_config/common.sh@9 -- # local app=target 00:04:30.019 08:42:52 json_config -- json_config/common.sh@10 -- # shift 00:04:30.019 08:42:52 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:30.019 08:42:52 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:30.019 08:42:52 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:30.019 08:42:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.019 08:42:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.019 08:42:52 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1148268 00:04:30.019 08:42:52 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:30.019 Waiting for target to run... 
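The target is started paused here: --wait-for-rpc holds subsystem initialization until an RPC arrives on the private socket given by -r. A sketch of that handshake driven by hand; the spdk_tgt flags mirror the log, while the sleep and the framework_start_init call are assumptions (this run releases the target via load_config instead, and the harness waits with waitforlisten rather than sleeping):

    # Start the target paused on a private RPC socket (flags as in the log).
    SPDK=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
    "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    sleep 1   # crude stand-in for waitforlisten

    # Release initialization once configuration RPCs have been issued.
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock framework_start_init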
00:04:30.019 08:42:52 json_config -- json_config/common.sh@25 -- # waitforlisten 1148268 /var/tmp/spdk_tgt.sock 00:04:30.019 08:42:52 json_config -- common/autotest_common.sh@830 -- # '[' -z 1148268 ']' 00:04:30.019 08:42:52 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:30.019 08:42:52 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:30.019 08:42:52 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:30.020 08:42:52 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:30.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:30.020 08:42:52 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:30.020 08:42:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.020 [2024-06-09 08:42:52.456198] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:04:30.020 [2024-06-09 08:42:52.456244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1148268 ] 00:04:30.020 EAL: No free 2048 kB hugepages reported on node 1 00:04:30.585 [2024-06-09 08:42:52.887281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.585 [2024-06-09 08:42:52.974022] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.842 08:42:53 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:30.842 08:42:53 json_config -- common/autotest_common.sh@863 -- # return 0 00:04:30.842 08:42:53 json_config -- json_config/common.sh@26 -- # echo '' 00:04:30.842 00:04:30.842 08:42:53 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:30.842 08:42:53 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:30.842 08:42:53 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:30.842 08:42:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.842 08:42:53 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:30.842 08:42:53 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:30.842 08:42:53 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:30.842 08:42:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.842 08:42:53 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:30.842 08:42:53 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:30.842 08:42:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:34.196 08:42:56 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:34.196 08:42:56 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:34.196 08:42:56 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:34.196 08:42:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.196 08:42:56 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:04:34.196 08:42:56 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:34.196 08:42:56 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:34.196 08:42:56 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:34.196 08:42:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:34.196 08:42:56 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:34.196 08:42:56 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:34.196 08:42:56 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:34.196 08:42:56 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:34.196 08:42:56 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:34.196 08:42:56 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:34.196 08:42:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.196 08:42:56 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:34.196 08:42:56 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:34.196 08:42:56 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:34.196 08:42:56 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:34.196 08:42:56 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:34.196 08:42:56 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:34.196 08:42:56 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:34.196 08:42:56 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:34.196 08:42:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.196 08:42:56 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:34.196 08:42:56 json_config -- json_config/json_config.sh@233 -- # [[ rdma == \r\d\m\a ]] 00:04:34.196 08:42:56 json_config -- json_config/json_config.sh@234 -- # TEST_TRANSPORT=rdma 00:04:34.196 08:42:56 json_config -- json_config/json_config.sh@234 -- # nvmftestinit 00:04:34.196 08:42:56 json_config -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:04:34.196 08:42:56 json_config -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:34.196 08:42:56 json_config -- nvmf/common.sh@448 -- # prepare_net_devs 00:04:34.196 08:42:56 json_config -- nvmf/common.sh@410 -- # local -g is_hw=no 00:04:34.196 08:42:56 json_config -- nvmf/common.sh@412 -- # remove_spdk_ns 00:04:34.196 08:42:56 json_config -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:34.196 08:42:56 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:04:34.196 08:42:56 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:34.196 08:42:56 json_config -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:04:34.196 08:42:56 json_config -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:04:34.196 08:42:56 json_config -- nvmf/common.sh@285 -- # xtrace_disable 00:04:34.196 08:42:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@289 
-- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@291 -- # pci_devs=() 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@291 -- # local -a pci_devs 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@292 -- # pci_net_devs=() 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@293 -- # pci_drivers=() 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@293 -- # local -A pci_drivers 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@295 -- # net_devs=() 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@295 -- # local -ga net_devs 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@296 -- # e810=() 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@296 -- # local -ga e810 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@297 -- # x722=() 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@297 -- # local -ga x722 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@298 -- # mlx=() 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@298 -- # local -ga mlx 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:04:39.462 Found 0000:af:00.0 (0x8086 - 0x159b) 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:39.462 08:43:01 json_config -- 
nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:04:39.462 Found 0000:af:00.1 (0x8086 - 0x159b) 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@375 -- # (( 0 != 1 )) 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@375 -- # modprobe -r irdma 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@377 -- # modinfo irdma 00:04:39.462 08:43:01 json_config -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:04:39.721 Found net devices under 0000:af:00.0: cvl_0_0 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:04:39.721 Found net devices under 0000:af:00.1: cvl_0_1 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@414 -- # is_hw=yes 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@420 -- # rdma_device_init 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:04:39.721 
08:43:02 json_config -- nvmf/common.sh@58 -- # uname 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@62 -- # modprobe ib_cm 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@63 -- # modprobe ib_core 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@64 -- # modprobe ib_umad 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@66 -- # modprobe iw_cm 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@502 -- # allocate_nic_ips 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@73 -- # get_rdma_if_list 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@104 -- # echo cvl_0_0 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@105 -- # continue 2 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@104 -- # echo cvl_0_1 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@105 -- # continue 2 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:04:39.721 8: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:04:39.721 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:04:39.721 altname enp175s0f0np0 00:04:39.721 altname ens801f0np0 00:04:39.721 inet 192.168.100.8/24 scope global cvl_0_0 00:04:39.721 valid_lft forever preferred_lft forever 00:04:39.721 inet6 
fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:04:39.721 valid_lft forever preferred_lft forever 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:04:39.721 08:43:02 json_config -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:04:39.722 9: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:04:39.722 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:04:39.722 altname enp175s0f1np1 00:04:39.722 altname ens801f1np1 00:04:39.722 inet 192.168.100.9/24 scope global cvl_0_1 00:04:39.722 valid_lft forever preferred_lft forever 00:04:39.722 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:04:39.722 valid_lft forever preferred_lft forever 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@422 -- # return 0 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@86 -- # get_rdma_if_list 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@104 -- # echo cvl_0_0 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@105 -- # continue 2 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@104 -- # echo cvl_0_1 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@105 -- # continue 2 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@112 -- # 
interface=cvl_0_0 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:04:39.722 192.168.100.9' 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:04:39.722 192.168.100.9' 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@457 -- # head -n 1 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:04:39.722 192.168.100.9' 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@458 -- # tail -n +2 00:04:39.722 08:43:02 json_config -- nvmf/common.sh@458 -- # head -n 1 00:04:39.980 08:43:02 json_config -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:04:39.980 08:43:02 json_config -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:04:39.980 08:43:02 json_config -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:04:39.980 08:43:02 json_config -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:04:39.980 08:43:02 json_config -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:04:39.980 08:43:02 json_config -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:04:39.980 08:43:02 json_config -- json_config/json_config.sh@237 -- # [[ -z 192.168.100.8 ]] 00:04:39.980 08:43:02 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:39.980 08:43:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:39.980 MallocForNvmf0 00:04:39.980 08:43:02 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:39.980 08:43:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:40.238 MallocForNvmf1 00:04:40.238 08:43:02 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:04:40.238 08:43:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:04:40.238 [2024-06-09 08:43:02.796609] rdma.c:2724:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:04:40.497 [2024-06-09 08:43:02.810712] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x1f74990/0x1f73fd0) succeed. 00:04:40.497 [2024-06-09 08:43:02.820268] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1f76ad0/0x1f74550) succeed. 
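The RPCs around this point assemble the NVMe-oF target piece by piece: transport, backing bdevs, subsystem, namespaces, listener. Condensed from the commands in this log (socket path, NQN, sizes, and addresses all as logged; the rpc wrapper function is ours):

    SPDK=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
    rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock "$@"; }

    rpc nvmf_create_transport -t rdma -u 8192 -c 0          # RDMA transport
    rpc bdev_malloc_create 8 512 --name MallocForNvmf0      # 8 MB, 512 B blocks
    rpc bdev_malloc_create 4 1024 --name MallocForNvmf1     # 4 MB, 1024 B blocks
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420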
00:04:40.497 [2024-06-09 08:43:02.820290] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:04:40.497 [2024-06-09 08:43:02.822371] iobuf.c: 372:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'nvmf_RDMA' iobuf large buffer cache at 1024/3071 entries. You may need to increase spdk_iobuf_opts.large_pool_count (1024) 00:04:40.497 [2024-06-09 08:43:02.822384] iobuf.c: 375:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:04:40.497 [2024-06-09 08:43:02.824045] transport.c: 629:nvmf_transport_poll_group_create: *ERROR*: Unable to reserve the full number of buffers for the pg buffer cache. 00:04:40.497 08:43:02 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:40.497 08:43:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:40.497 08:43:03 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:40.497 08:43:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:40.754 08:43:03 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:40.754 08:43:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:41.011 08:43:03 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:04:41.011 08:43:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:04:41.011 [2024-06-09 08:43:03.522162] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:04:41.011 08:43:03 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:41.011 08:43:03 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:41.011 08:43:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.269 08:43:03 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:41.269 08:43:03 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:41.269 08:43:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.269 08:43:03 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:41.269 08:43:03 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:41.269 08:43:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:41.269 MallocBdevForConfigChangeCheck 00:04:41.269 08:43:03 json_config -- json_config/json_config.sh@302 -- # timing_exit 
json_config_test_init 00:04:41.269 08:43:03 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:41.269 08:43:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.269 08:43:03 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:41.269 08:43:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:41.835 08:43:04 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:41.835 INFO: shutting down applications... 00:04:41.835 08:43:04 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:41.835 08:43:04 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:41.835 08:43:04 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:41.835 08:43:04 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:43.209 Calling clear_iscsi_subsystem 00:04:43.209 Calling clear_nvmf_subsystem 00:04:43.209 Calling clear_nbd_subsystem 00:04:43.209 Calling clear_ublk_subsystem 00:04:43.209 Calling clear_vhost_blk_subsystem 00:04:43.209 Calling clear_vhost_scsi_subsystem 00:04:43.209 Calling clear_bdev_subsystem 00:04:43.209 08:43:05 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py 00:04:43.209 08:43:05 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:43.209 08:43:05 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:43.209 08:43:05 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:43.209 08:43:05 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:43.209 08:43:05 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:43.467 08:43:06 json_config -- json_config/json_config.sh@345 -- # break 00:04:43.467 08:43:06 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:43.467 08:43:06 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:43.467 08:43:06 json_config -- json_config/common.sh@31 -- # local app=target 00:04:43.467 08:43:06 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:43.467 08:43:06 json_config -- json_config/common.sh@35 -- # [[ -n 1148268 ]] 00:04:43.467 08:43:06 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1148268 00:04:43.467 08:43:06 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:43.467 08:43:06 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:43.467 08:43:06 json_config -- json_config/common.sh@41 -- # kill -0 1148268 00:04:43.467 08:43:06 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:44.035 08:43:06 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:44.035 08:43:06 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:44.035 08:43:06 json_config -- json_config/common.sh@41 -- # kill -0 1148268 00:04:44.035 08:43:06 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:44.035 
08:43:06 json_config -- json_config/common.sh@43 -- # break 00:04:44.035 08:43:06 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:44.035 08:43:06 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:44.035 SPDK target shutdown done 00:04:44.035 08:43:06 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:44.035 INFO: relaunching applications... 00:04:44.035 08:43:06 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json 00:04:44.035 08:43:06 json_config -- json_config/common.sh@9 -- # local app=target 00:04:44.035 08:43:06 json_config -- json_config/common.sh@10 -- # shift 00:04:44.035 08:43:06 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:44.035 08:43:06 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:44.035 08:43:06 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:44.035 08:43:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:44.035 08:43:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:44.035 08:43:06 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1152758 00:04:44.035 08:43:06 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:44.035 Waiting for target to run... 00:04:44.035 08:43:06 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json 00:04:44.035 08:43:06 json_config -- json_config/common.sh@25 -- # waitforlisten 1152758 /var/tmp/spdk_tgt.sock 00:04:44.035 08:43:06 json_config -- common/autotest_common.sh@830 -- # '[' -z 1152758 ']' 00:04:44.036 08:43:06 json_config -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:44.036 08:43:06 json_config -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:44.036 08:43:06 json_config -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:44.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:44.036 08:43:06 json_config -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:44.036 08:43:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.036 [2024-06-09 08:43:06.571037] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:04:44.036 [2024-06-09 08:43:06.571090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1152758 ] 00:04:44.036 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.602 [2024-06-09 08:43:07.003024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.602 [2024-06-09 08:43:07.088192] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.885 [2024-06-09 08:43:10.099463] transport.c: 288:nvmf_transport_create: *WARNING*: The num_shared_buffers value (4095) is larger than the available iobuf pool size (1024). Please increase the iobuf pool sizes. 
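This relaunch closes the save/restore loop: the JSON captured earlier with save_config is handed back verbatim via --json, so the new target should come up with an identical configuration. The round-trip reduced to its two commands, both of which appear in this log (the old target has already been shut down at this point):

    SPDK=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
    # Snapshot the live configuration before shutdown...
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config > "$SPDK/spdk_tgt_config.json"
    # ...then boot a fresh target directly from it (no --wait-for-rpc needed).
    "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json "$SPDK/spdk_tgt_config.json" &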
00:04:47.885 [2024-06-09 08:43:10.114502] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0xeed9e0/0xeed020) succeed. 00:04:47.885 [2024-06-09 08:43:10.124102] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0xeefb20/0xeed5a0) succeed. 00:04:47.885 [2024-06-09 08:43:10.126207] iobuf.c: 372:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'nvmf_RDMA' iobuf large buffer cache at 1024/3071 entries. You may need to increase spdk_iobuf_opts.large_pool_count (1024) 00:04:47.885 [2024-06-09 08:43:10.126222] iobuf.c: 375:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:04:47.885 [2024-06-09 08:43:10.127845] transport.c: 629:nvmf_transport_poll_group_create: *ERROR*: Unable to reserve the full number of buffers for the pg buffer cache. 00:04:47.885 [2024-06-09 08:43:10.156043] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:04:48.142 08:43:10 json_config -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:48.142 08:43:10 json_config -- common/autotest_common.sh@863 -- # return 0 00:04:48.142 08:43:10 json_config -- json_config/common.sh@26 -- # echo '' 00:04:48.142 00:04:48.399 08:43:10 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:48.399 08:43:10 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:48.399 INFO: Checking if target configuration is the same... 00:04:48.399 08:43:10 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:48.399 08:43:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:48.399 08:43:10 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json 00:04:48.399 + '[' 2 -ne 2 ']' 00:04:48.399 +++ dirname /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:48.399 ++ readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/../.. 00:04:48.399 + rootdir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:04:48.399 +++ basename /dev/fd/62 00:04:48.399 ++ mktemp /tmp/62.XXX 00:04:48.399 + tmp_file_1=/tmp/62.eqe 00:04:48.399 +++ basename /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json 00:04:48.399 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:48.399 + tmp_file_2=/tmp/spdk_tgt_config.json.LDg 00:04:48.399 + ret=0 00:04:48.399 + /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:48.656 + /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:48.656 + diff -u /tmp/62.eqe /tmp/spdk_tgt_config.json.LDg 00:04:48.656 + echo 'INFO: JSON config files are the same' 00:04:48.656 INFO: JSON config files are the same 00:04:48.656 + rm /tmp/62.eqe /tmp/spdk_tgt_config.json.LDg 00:04:48.656 + exit 0 00:04:48.656 08:43:11 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:48.656 08:43:11 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:48.656 INFO: changing configuration and checking if this can be detected... 
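The "JSON config files are the same" verdict above comes from json_diff.sh: both configs are canonicalized with config_filter.py -method sort, then byte-compared with diff -u. Roughly, with the sort_json helper and /tmp file names being our own shorthand:

    SPDK=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
    sort_json() { "$SPDK/test/json_config/config_filter.py" -method sort; }

    # Normalize the live config and the saved file, then compare.
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config | sort_json > /tmp/live.json
    sort_json < "$SPDK/spdk_tgt_config.json" > /tmp/saved.json
    diff -u /tmp/live.json /tmp/saved.json && echo 'INFO: JSON config files are the same'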
00:04:48.656 08:43:11 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:48.656 08:43:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:48.914 08:43:11 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json 00:04:48.914 08:43:11 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:48.914 08:43:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:48.914 + '[' 2 -ne 2 ']' 00:04:48.914 +++ dirname /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:48.914 ++ readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/../.. 00:04:48.914 + rootdir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:04:48.914 +++ basename /dev/fd/62 00:04:48.914 ++ mktemp /tmp/62.XXX 00:04:48.914 + tmp_file_1=/tmp/62.Nl9 00:04:48.914 +++ basename /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json 00:04:48.914 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:48.914 + tmp_file_2=/tmp/spdk_tgt_config.json.lAs 00:04:48.914 + ret=0 00:04:48.914 + /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:49.171 + /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:49.171 + diff -u /tmp/62.Nl9 /tmp/spdk_tgt_config.json.lAs 00:04:49.171 + ret=1 00:04:49.171 + echo '=== Start of file: /tmp/62.Nl9 ===' 00:04:49.171 + cat /tmp/62.Nl9 00:04:49.171 + echo '=== End of file: /tmp/62.Nl9 ===' 00:04:49.171 + echo '' 00:04:49.171 + echo '=== Start of file: /tmp/spdk_tgt_config.json.lAs ===' 00:04:49.171 + cat /tmp/spdk_tgt_config.json.lAs 00:04:49.171 + echo '=== End of file: /tmp/spdk_tgt_config.json.lAs ===' 00:04:49.171 + echo '' 00:04:49.171 + rm /tmp/62.Nl9 /tmp/spdk_tgt_config.json.lAs 00:04:49.171 + exit 1 00:04:49.171 08:43:11 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:49.171 INFO: configuration change detected. 
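The change-detection test is the inverse check: delete the sentinel bdev (MallocBdevForConfigChangeCheck, created earlier for exactly this purpose) and require the diff to fail. A compact sketch of that expectation, using the process-substitution form of json_diff.sh seen in the trace (/dev/fd/62):

    # After deleting the sentinel bdev, the live config must diverge from the file.
    SPDK=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    if "$SPDK/test/json_config/json_diff.sh" \
           <("$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config) \
           "$SPDK/spdk_tgt_config.json"; then
        echo 'FAIL: config change was not detected' >&2
        exit 1
    fi
    echo 'INFO: configuration change detected.'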
00:04:49.171 08:43:11 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:49.171 08:43:11 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:49.171 08:43:11 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:49.171 08:43:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.171 08:43:11 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:49.171 08:43:11 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:49.171 08:43:11 json_config -- json_config/json_config.sh@317 -- # [[ -n 1152758 ]] 00:04:49.171 08:43:11 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:49.171 08:43:11 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:49.171 08:43:11 json_config -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:49.171 08:43:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.171 08:43:11 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:49.171 08:43:11 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:49.171 08:43:11 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:49.171 08:43:11 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:49.171 08:43:11 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:49.172 08:43:11 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:49.172 08:43:11 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:49.172 08:43:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.172 08:43:11 json_config -- json_config/json_config.sh@323 -- # killprocess 1152758 00:04:49.172 08:43:11 json_config -- common/autotest_common.sh@949 -- # '[' -z 1152758 ']' 00:04:49.172 08:43:11 json_config -- common/autotest_common.sh@953 -- # kill -0 1152758 00:04:49.172 08:43:11 json_config -- common/autotest_common.sh@954 -- # uname 00:04:49.172 08:43:11 json_config -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:49.172 08:43:11 json_config -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1152758 00:04:49.172 08:43:11 json_config -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:49.172 08:43:11 json_config -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:49.172 08:43:11 json_config -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1152758' 00:04:49.172 killing process with pid 1152758 00:04:49.172 08:43:11 json_config -- common/autotest_common.sh@968 -- # kill 1152758 00:04:49.172 08:43:11 json_config -- common/autotest_common.sh@973 -- # wait 1152758 00:04:51.068 08:43:13 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/spdk_tgt_config.json 00:04:51.068 08:43:13 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:51.068 08:43:13 json_config -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:51.068 08:43:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.068 08:43:13 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:51.068 08:43:13 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:51.068 INFO: Success 00:04:51.068 08:43:13 json_config -- 
json_config/json_config.sh@1 -- # nvmftestfini 00:04:51.068 08:43:13 json_config -- nvmf/common.sh@488 -- # nvmfcleanup 00:04:51.068 08:43:13 json_config -- nvmf/common.sh@117 -- # sync 00:04:51.068 08:43:13 json_config -- nvmf/common.sh@119 -- # '[' '' == tcp ']' 00:04:51.068 08:43:13 json_config -- nvmf/common.sh@119 -- # '[' '' == rdma ']' 00:04:51.068 08:43:13 json_config -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:04:51.068 08:43:13 json_config -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:04:51.068 08:43:13 json_config -- nvmf/common.sh@495 -- # [[ '' == \t\c\p ]] 00:04:51.068 00:04:51.068 real 0m20.901s 00:04:51.068 user 0m22.836s 00:04:51.068 sys 0m6.423s 00:04:51.068 08:43:13 json_config -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:51.068 08:43:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.068 ************************************ 00:04:51.068 END TEST json_config 00:04:51.068 ************************************ 00:04:51.068 08:43:13 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:51.068 08:43:13 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:51.068 08:43:13 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:51.068 08:43:13 -- common/autotest_common.sh@10 -- # set +x 00:04:51.068 ************************************ 00:04:51.068 START TEST json_config_extra_key 00:04:51.068 ************************************ 00:04:51.068 08:43:13 json_config_extra_key -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:51.068 08:43:13 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:04:51.068 08:43:13 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:51.068 08:43:13 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:51.068 08:43:13 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:51.068 08:43:13 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:51.068 08:43:13 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:51.068 08:43:13 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:51.068 08:43:13 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:51.068 08:43:13 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:51.068 08:43:13 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:51.068 08:43:13 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:51.068 08:43:13 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:51.068 08:43:13 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:04:51.068 08:43:13 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:04:51.068 08:43:13 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:51.068 08:43:13 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:51.068 08:43:13 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:51.068 08:43:13 json_config_extra_key -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:51.068 08:43:13 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:04:51.068 08:43:13 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:51.068 08:43:13 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:51.068 08:43:13 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:51.068 08:43:13 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.068 08:43:13 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.068 08:43:13 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.068 08:43:13 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:51.068 08:43:13 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.068 08:43:13 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:51.068 08:43:13 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:51.068 08:43:13 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:51.068 08:43:13 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:51.068 08:43:13 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:51.068 08:43:13 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:51.068 08:43:13 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:51.068 08:43:13 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:51.068 08:43:13 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:51.068 08:43:13 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/common.sh 00:04:51.068 08:43:13 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # 
app_pid=(['target']='') 00:04:51.068 08:43:13 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:51.068 08:43:13 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:51.068 08:43:13 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:51.068 08:43:13 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:51.068 08:43:13 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:51.068 08:43:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:51.068 08:43:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:51.068 08:43:13 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:51.068 08:43:13 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:51.068 INFO: launching applications... 00:04:51.068 08:43:13 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/extra_key.json 00:04:51.068 08:43:13 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:51.068 08:43:13 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:51.068 08:43:13 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:51.068 08:43:13 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:51.068 08:43:13 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:51.068 08:43:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.068 08:43:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:51.068 08:43:13 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1154012 00:04:51.068 08:43:13 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:51.068 Waiting for target to run... 00:04:51.068 08:43:13 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1154012 /var/tmp/spdk_tgt.sock 00:04:51.068 08:43:13 json_config_extra_key -- common/autotest_common.sh@830 -- # '[' -z 1154012 ']' 00:04:51.068 08:43:13 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/extra_key.json 00:04:51.068 08:43:13 json_config_extra_key -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:51.068 08:43:13 json_config_extra_key -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:51.068 08:43:13 json_config_extra_key -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:51.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
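The harness now blocks in waitforlisten until pid 1154012 answers on /var/tmp/spdk_tgt.sock (max_retries=100 per the trace). A minimal poll loop in the same spirit, not the literal common.sh implementation; using rpc_get_methods as the readiness probe is an assumption, though the method and rpc.py's -t timeout flag are both visible elsewhere in this log:

  rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk_tgt.sock
  for i in $(seq 1 100); do           # mirrors max_retries=100
      if "$rpc" -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1; then
          echo "target is listening on $sock"
          break
      fi
      sleep 0.1
  done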
00:04:51.068 08:43:13 json_config_extra_key -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:51.068 08:43:13 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:51.068 [2024-06-09 08:43:13.411992] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:04:51.068 [2024-06-09 08:43:13.412037] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1154012 ] 00:04:51.068 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.326 [2024-06-09 08:43:13.680871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.326 [2024-06-09 08:43:13.747699] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.892 08:43:14 json_config_extra_key -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:51.892 08:43:14 json_config_extra_key -- common/autotest_common.sh@863 -- # return 0 00:04:51.892 08:43:14 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:51.892 00:04:51.892 08:43:14 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:51.892 INFO: shutting down applications... 00:04:51.892 08:43:14 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:51.892 08:43:14 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:51.892 08:43:14 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:51.892 08:43:14 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1154012 ]] 00:04:51.892 08:43:14 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1154012 00:04:51.892 08:43:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:51.892 08:43:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.892 08:43:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1154012 00:04:51.892 08:43:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:52.150 08:43:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:52.150 08:43:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:52.150 08:43:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1154012 00:04:52.150 08:43:14 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:52.150 08:43:14 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:52.150 08:43:14 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:52.150 08:43:14 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:52.150 SPDK target shutdown done 00:04:52.150 08:43:14 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:52.150 Success 00:04:52.150 00:04:52.150 real 0m1.425s 00:04:52.150 user 0m1.188s 00:04:52.150 sys 0m0.365s 00:04:52.150 08:43:14 json_config_extra_key -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:52.150 08:43:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:52.150 ************************************ 00:04:52.150 END TEST json_config_extra_key 00:04:52.150 ************************************ 00:04:52.409 08:43:14 -- spdk/autotest.sh@174 -- # run_test alias_rpc 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:52.409 08:43:14 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:52.409 08:43:14 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:52.409 08:43:14 -- common/autotest_common.sh@10 -- # set +x 00:04:52.409 ************************************ 00:04:52.409 START TEST alias_rpc 00:04:52.409 ************************************ 00:04:52.409 08:43:14 alias_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:52.409 * Looking for test storage... 00:04:52.409 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/alias_rpc 00:04:52.409 08:43:14 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:52.409 08:43:14 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1154289 00:04:52.409 08:43:14 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1154289 00:04:52.409 08:43:14 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.409 08:43:14 alias_rpc -- common/autotest_common.sh@830 -- # '[' -z 1154289 ']' 00:04:52.409 08:43:14 alias_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.409 08:43:14 alias_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:52.409 08:43:14 alias_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.409 08:43:14 alias_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:52.409 08:43:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.409 [2024-06-09 08:43:14.903530] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:04:52.409 [2024-06-09 08:43:14.903581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1154289 ] 00:04:52.409 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.409 [2024-06-09 08:43:14.958893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.667 [2024-06-09 08:43:15.037574] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.230 08:43:15 alias_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:53.230 08:43:15 alias_rpc -- common/autotest_common.sh@863 -- # return 0 00:04:53.230 08:43:15 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:53.488 08:43:15 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1154289 00:04:53.488 08:43:15 alias_rpc -- common/autotest_common.sh@949 -- # '[' -z 1154289 ']' 00:04:53.488 08:43:15 alias_rpc -- common/autotest_common.sh@953 -- # kill -0 1154289 00:04:53.488 08:43:15 alias_rpc -- common/autotest_common.sh@954 -- # uname 00:04:53.488 08:43:15 alias_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:53.488 08:43:15 alias_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1154289 00:04:53.488 08:43:15 alias_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:53.488 08:43:15 alias_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:53.488 08:43:15 alias_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1154289' 00:04:53.488 killing process with pid 1154289 00:04:53.488 08:43:15 alias_rpc -- common/autotest_common.sh@968 -- # kill 1154289 00:04:53.488 08:43:15 alias_rpc -- common/autotest_common.sh@973 -- # wait 1154289 00:04:53.746 00:04:53.746 real 0m1.466s 00:04:53.746 user 0m1.583s 00:04:53.746 sys 0m0.394s 00:04:53.746 08:43:16 alias_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:53.746 08:43:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.746 ************************************ 00:04:53.746 END TEST alias_rpc 00:04:53.746 ************************************ 00:04:53.746 08:43:16 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:53.746 08:43:16 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:53.746 08:43:16 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:53.746 08:43:16 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:53.746 08:43:16 -- common/autotest_common.sh@10 -- # set +x 00:04:53.746 ************************************ 00:04:53.746 START TEST spdkcli_tcp 00:04:53.747 ************************************ 00:04:53.747 08:43:16 spdkcli_tcp -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:54.004 * Looking for test storage... 
00:04:54.004 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli 00:04:54.004 08:43:16 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/common.sh 00:04:54.004 08:43:16 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:54.004 08:43:16 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/json_config/clear_config.py 00:04:54.004 08:43:16 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:54.004 08:43:16 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:54.004 08:43:16 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:54.004 08:43:16 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:54.004 08:43:16 spdkcli_tcp -- common/autotest_common.sh@723 -- # xtrace_disable 00:04:54.004 08:43:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:54.004 08:43:16 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1154613 00:04:54.004 08:43:16 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1154613 00:04:54.004 08:43:16 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:54.004 08:43:16 spdkcli_tcp -- common/autotest_common.sh@830 -- # '[' -z 1154613 ']' 00:04:54.004 08:43:16 spdkcli_tcp -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.004 08:43:16 spdkcli_tcp -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:54.004 08:43:16 spdkcli_tcp -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.004 08:43:16 spdkcli_tcp -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:54.004 08:43:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:54.004 [2024-06-09 08:43:16.446466] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:04:54.004 [2024-06-09 08:43:16.446516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1154613 ] 00:04:54.004 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.004 [2024-06-09 08:43:16.503977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.262 [2024-06-09 08:43:16.584813] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.262 [2024-06-09 08:43:16.584815] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.847 08:43:17 spdkcli_tcp -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:54.847 08:43:17 spdkcli_tcp -- common/autotest_common.sh@863 -- # return 0 00:04:54.847 08:43:17 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1154795 00:04:54.847 08:43:17 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:54.847 08:43:17 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:54.847 [ 00:04:54.847 "bdev_malloc_delete", 00:04:54.847 "bdev_malloc_create", 00:04:54.847 "bdev_null_resize", 00:04:54.847 "bdev_null_delete", 00:04:54.847 "bdev_null_create", 00:04:54.847 "bdev_nvme_cuse_unregister", 00:04:54.847 "bdev_nvme_cuse_register", 00:04:54.847 "bdev_opal_new_user", 00:04:54.847 "bdev_opal_set_lock_state", 00:04:54.847 "bdev_opal_delete", 00:04:54.847 "bdev_opal_get_info", 00:04:54.847 "bdev_opal_create", 00:04:54.847 "bdev_nvme_opal_revert", 00:04:54.847 "bdev_nvme_opal_init", 00:04:54.847 "bdev_nvme_send_cmd", 00:04:54.847 "bdev_nvme_get_path_iostat", 00:04:54.847 "bdev_nvme_get_mdns_discovery_info", 00:04:54.847 "bdev_nvme_stop_mdns_discovery", 00:04:54.847 "bdev_nvme_start_mdns_discovery", 00:04:54.847 "bdev_nvme_set_multipath_policy", 00:04:54.847 "bdev_nvme_set_preferred_path", 00:04:54.848 "bdev_nvme_get_io_paths", 00:04:54.848 "bdev_nvme_remove_error_injection", 00:04:54.848 "bdev_nvme_add_error_injection", 00:04:54.848 "bdev_nvme_get_discovery_info", 00:04:54.848 "bdev_nvme_stop_discovery", 00:04:54.848 "bdev_nvme_start_discovery", 00:04:54.848 "bdev_nvme_get_controller_health_info", 00:04:54.848 "bdev_nvme_disable_controller", 00:04:54.848 "bdev_nvme_enable_controller", 00:04:54.848 "bdev_nvme_reset_controller", 00:04:54.848 "bdev_nvme_get_transport_statistics", 00:04:54.848 "bdev_nvme_apply_firmware", 00:04:54.848 "bdev_nvme_detach_controller", 00:04:54.848 "bdev_nvme_get_controllers", 00:04:54.848 "bdev_nvme_attach_controller", 00:04:54.848 "bdev_nvme_set_hotplug", 00:04:54.848 "bdev_nvme_set_options", 00:04:54.848 "bdev_passthru_delete", 00:04:54.848 "bdev_passthru_create", 00:04:54.848 "bdev_lvol_set_parent_bdev", 00:04:54.848 "bdev_lvol_set_parent", 00:04:54.848 "bdev_lvol_check_shallow_copy", 00:04:54.848 "bdev_lvol_start_shallow_copy", 00:04:54.848 "bdev_lvol_grow_lvstore", 00:04:54.848 "bdev_lvol_get_lvols", 00:04:54.848 "bdev_lvol_get_lvstores", 00:04:54.848 "bdev_lvol_delete", 00:04:54.848 "bdev_lvol_set_read_only", 00:04:54.848 "bdev_lvol_resize", 00:04:54.848 "bdev_lvol_decouple_parent", 00:04:54.848 "bdev_lvol_inflate", 00:04:54.848 "bdev_lvol_rename", 00:04:54.848 "bdev_lvol_clone_bdev", 00:04:54.848 "bdev_lvol_clone", 00:04:54.848 "bdev_lvol_snapshot", 00:04:54.848 "bdev_lvol_create", 00:04:54.848 "bdev_lvol_delete_lvstore", 00:04:54.848 "bdev_lvol_rename_lvstore", 
00:04:54.848 "bdev_lvol_create_lvstore", 00:04:54.848 "bdev_raid_set_options", 00:04:54.848 "bdev_raid_remove_base_bdev", 00:04:54.848 "bdev_raid_add_base_bdev", 00:04:54.848 "bdev_raid_delete", 00:04:54.848 "bdev_raid_create", 00:04:54.848 "bdev_raid_get_bdevs", 00:04:54.848 "bdev_error_inject_error", 00:04:54.848 "bdev_error_delete", 00:04:54.848 "bdev_error_create", 00:04:54.848 "bdev_split_delete", 00:04:54.848 "bdev_split_create", 00:04:54.848 "bdev_delay_delete", 00:04:54.848 "bdev_delay_create", 00:04:54.848 "bdev_delay_update_latency", 00:04:54.848 "bdev_zone_block_delete", 00:04:54.848 "bdev_zone_block_create", 00:04:54.848 "blobfs_create", 00:04:54.848 "blobfs_detect", 00:04:54.848 "blobfs_set_cache_size", 00:04:54.848 "bdev_aio_delete", 00:04:54.848 "bdev_aio_rescan", 00:04:54.848 "bdev_aio_create", 00:04:54.848 "bdev_ftl_set_property", 00:04:54.848 "bdev_ftl_get_properties", 00:04:54.848 "bdev_ftl_get_stats", 00:04:54.848 "bdev_ftl_unmap", 00:04:54.848 "bdev_ftl_unload", 00:04:54.848 "bdev_ftl_delete", 00:04:54.848 "bdev_ftl_load", 00:04:54.848 "bdev_ftl_create", 00:04:54.848 "bdev_virtio_attach_controller", 00:04:54.848 "bdev_virtio_scsi_get_devices", 00:04:54.848 "bdev_virtio_detach_controller", 00:04:54.848 "bdev_virtio_blk_set_hotplug", 00:04:54.848 "bdev_iscsi_delete", 00:04:54.848 "bdev_iscsi_create", 00:04:54.848 "bdev_iscsi_set_options", 00:04:54.848 "accel_error_inject_error", 00:04:54.848 "ioat_scan_accel_module", 00:04:54.848 "dsa_scan_accel_module", 00:04:54.848 "iaa_scan_accel_module", 00:04:54.848 "keyring_file_remove_key", 00:04:54.848 "keyring_file_add_key", 00:04:54.848 "keyring_linux_set_options", 00:04:54.848 "iscsi_get_histogram", 00:04:54.848 "iscsi_enable_histogram", 00:04:54.848 "iscsi_set_options", 00:04:54.848 "iscsi_get_auth_groups", 00:04:54.848 "iscsi_auth_group_remove_secret", 00:04:54.848 "iscsi_auth_group_add_secret", 00:04:54.848 "iscsi_delete_auth_group", 00:04:54.848 "iscsi_create_auth_group", 00:04:54.848 "iscsi_set_discovery_auth", 00:04:54.848 "iscsi_get_options", 00:04:54.848 "iscsi_target_node_request_logout", 00:04:54.848 "iscsi_target_node_set_redirect", 00:04:54.848 "iscsi_target_node_set_auth", 00:04:54.848 "iscsi_target_node_add_lun", 00:04:54.848 "iscsi_get_stats", 00:04:54.848 "iscsi_get_connections", 00:04:54.848 "iscsi_portal_group_set_auth", 00:04:54.848 "iscsi_start_portal_group", 00:04:54.848 "iscsi_delete_portal_group", 00:04:54.848 "iscsi_create_portal_group", 00:04:54.848 "iscsi_get_portal_groups", 00:04:54.848 "iscsi_delete_target_node", 00:04:54.848 "iscsi_target_node_remove_pg_ig_maps", 00:04:54.848 "iscsi_target_node_add_pg_ig_maps", 00:04:54.848 "iscsi_create_target_node", 00:04:54.848 "iscsi_get_target_nodes", 00:04:54.848 "iscsi_delete_initiator_group", 00:04:54.848 "iscsi_initiator_group_remove_initiators", 00:04:54.848 "iscsi_initiator_group_add_initiators", 00:04:54.848 "iscsi_create_initiator_group", 00:04:54.849 "iscsi_get_initiator_groups", 00:04:54.849 "nvmf_set_crdt", 00:04:54.849 "nvmf_set_config", 00:04:54.849 "nvmf_set_max_subsystems", 00:04:54.849 "nvmf_stop_mdns_prr", 00:04:54.849 "nvmf_publish_mdns_prr", 00:04:54.849 "nvmf_subsystem_get_listeners", 00:04:54.849 "nvmf_subsystem_get_qpairs", 00:04:54.849 "nvmf_subsystem_get_controllers", 00:04:54.849 "nvmf_get_stats", 00:04:54.849 "nvmf_get_transports", 00:04:54.849 "nvmf_create_transport", 00:04:54.849 "nvmf_get_targets", 00:04:54.849 "nvmf_delete_target", 00:04:54.849 "nvmf_create_target", 00:04:54.849 "nvmf_subsystem_allow_any_host", 00:04:54.849 
"nvmf_subsystem_remove_host", 00:04:54.849 "nvmf_subsystem_add_host", 00:04:54.849 "nvmf_ns_remove_host", 00:04:54.849 "nvmf_ns_add_host", 00:04:54.849 "nvmf_subsystem_remove_ns", 00:04:54.849 "nvmf_subsystem_add_ns", 00:04:54.849 "nvmf_subsystem_listener_set_ana_state", 00:04:54.849 "nvmf_discovery_get_referrals", 00:04:54.849 "nvmf_discovery_remove_referral", 00:04:54.849 "nvmf_discovery_add_referral", 00:04:54.849 "nvmf_subsystem_remove_listener", 00:04:54.849 "nvmf_subsystem_add_listener", 00:04:54.849 "nvmf_delete_subsystem", 00:04:54.849 "nvmf_create_subsystem", 00:04:54.849 "nvmf_get_subsystems", 00:04:54.849 "env_dpdk_get_mem_stats", 00:04:54.849 "nbd_get_disks", 00:04:54.849 "nbd_stop_disk", 00:04:54.849 "nbd_start_disk", 00:04:54.849 "ublk_recover_disk", 00:04:54.849 "ublk_get_disks", 00:04:54.849 "ublk_stop_disk", 00:04:54.849 "ublk_start_disk", 00:04:54.849 "ublk_destroy_target", 00:04:54.849 "ublk_create_target", 00:04:54.849 "virtio_blk_create_transport", 00:04:54.849 "virtio_blk_get_transports", 00:04:54.849 "vhost_controller_set_coalescing", 00:04:54.849 "vhost_get_controllers", 00:04:54.849 "vhost_delete_controller", 00:04:54.849 "vhost_create_blk_controller", 00:04:54.849 "vhost_scsi_controller_remove_target", 00:04:54.849 "vhost_scsi_controller_add_target", 00:04:54.849 "vhost_start_scsi_controller", 00:04:54.849 "vhost_create_scsi_controller", 00:04:54.849 "thread_set_cpumask", 00:04:54.849 "framework_get_scheduler", 00:04:54.849 "framework_set_scheduler", 00:04:54.849 "framework_get_reactors", 00:04:54.849 "thread_get_io_channels", 00:04:54.849 "thread_get_pollers", 00:04:54.849 "thread_get_stats", 00:04:54.849 "framework_monitor_context_switch", 00:04:54.849 "spdk_kill_instance", 00:04:54.849 "log_enable_timestamps", 00:04:54.849 "log_get_flags", 00:04:54.849 "log_clear_flag", 00:04:54.849 "log_set_flag", 00:04:54.849 "log_get_level", 00:04:54.849 "log_set_level", 00:04:54.849 "log_get_print_level", 00:04:54.849 "log_set_print_level", 00:04:54.849 "framework_enable_cpumask_locks", 00:04:54.849 "framework_disable_cpumask_locks", 00:04:54.849 "framework_wait_init", 00:04:54.849 "framework_start_init", 00:04:54.849 "scsi_get_devices", 00:04:54.849 "bdev_get_histogram", 00:04:54.849 "bdev_enable_histogram", 00:04:54.849 "bdev_set_qos_limit", 00:04:54.849 "bdev_set_qd_sampling_period", 00:04:54.849 "bdev_get_bdevs", 00:04:54.849 "bdev_reset_iostat", 00:04:54.849 "bdev_get_iostat", 00:04:54.849 "bdev_examine", 00:04:54.849 "bdev_wait_for_examine", 00:04:54.849 "bdev_set_options", 00:04:54.849 "notify_get_notifications", 00:04:54.850 "notify_get_types", 00:04:54.850 "accel_get_stats", 00:04:54.850 "accel_set_options", 00:04:54.850 "accel_set_driver", 00:04:54.850 "accel_crypto_key_destroy", 00:04:54.850 "accel_crypto_keys_get", 00:04:54.850 "accel_crypto_key_create", 00:04:54.850 "accel_assign_opc", 00:04:54.850 "accel_get_module_info", 00:04:54.850 "accel_get_opc_assignments", 00:04:54.850 "vmd_rescan", 00:04:54.850 "vmd_remove_device", 00:04:54.850 "vmd_enable", 00:04:54.850 "sock_get_default_impl", 00:04:54.850 "sock_set_default_impl", 00:04:54.850 "sock_impl_set_options", 00:04:54.850 "sock_impl_get_options", 00:04:54.850 "iobuf_get_stats", 00:04:54.850 "iobuf_set_options", 00:04:54.850 "framework_get_pci_devices", 00:04:54.850 "framework_get_config", 00:04:54.850 "framework_get_subsystems", 00:04:54.850 "trace_get_info", 00:04:54.850 "trace_get_tpoint_group_mask", 00:04:54.850 "trace_disable_tpoint_group", 00:04:54.850 "trace_enable_tpoint_group", 00:04:54.850 
"trace_clear_tpoint_mask", 00:04:54.850 "trace_set_tpoint_mask", 00:04:54.850 "keyring_get_keys", 00:04:54.850 "spdk_get_version", 00:04:54.850 "rpc_get_methods" 00:04:54.850 ] 00:04:54.850 08:43:17 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:54.850 08:43:17 spdkcli_tcp -- common/autotest_common.sh@729 -- # xtrace_disable 00:04:55.122 08:43:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:55.122 08:43:17 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:55.122 08:43:17 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1154613 00:04:55.122 08:43:17 spdkcli_tcp -- common/autotest_common.sh@949 -- # '[' -z 1154613 ']' 00:04:55.122 08:43:17 spdkcli_tcp -- common/autotest_common.sh@953 -- # kill -0 1154613 00:04:55.122 08:43:17 spdkcli_tcp -- common/autotest_common.sh@954 -- # uname 00:04:55.122 08:43:17 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:55.122 08:43:17 spdkcli_tcp -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1154613 00:04:55.122 08:43:17 spdkcli_tcp -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:55.122 08:43:17 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:55.122 08:43:17 spdkcli_tcp -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1154613' 00:04:55.122 killing process with pid 1154613 00:04:55.122 08:43:17 spdkcli_tcp -- common/autotest_common.sh@968 -- # kill 1154613 00:04:55.122 08:43:17 spdkcli_tcp -- common/autotest_common.sh@973 -- # wait 1154613 00:04:55.379 00:04:55.379 real 0m1.479s 00:04:55.379 user 0m2.735s 00:04:55.379 sys 0m0.423s 00:04:55.379 08:43:17 spdkcli_tcp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:55.379 08:43:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:55.379 ************************************ 00:04:55.379 END TEST spdkcli_tcp 00:04:55.379 ************************************ 00:04:55.379 08:43:17 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:55.379 08:43:17 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:55.379 08:43:17 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:55.379 08:43:17 -- common/autotest_common.sh@10 -- # set +x 00:04:55.379 ************************************ 00:04:55.379 START TEST dpdk_mem_utility 00:04:55.379 ************************************ 00:04:55.379 08:43:17 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:55.379 * Looking for test storage... 
00:04:55.379 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/dpdk_memory_utility 00:04:55.379 08:43:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:55.379 08:43:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1155040 00:04:55.379 08:43:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.379 08:43:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1155040 00:04:55.379 08:43:17 dpdk_mem_utility -- common/autotest_common.sh@830 -- # '[' -z 1155040 ']' 00:04:55.379 08:43:17 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.379 08:43:17 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local max_retries=100 00:04:55.379 08:43:17 dpdk_mem_utility -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.380 08:43:17 dpdk_mem_utility -- common/autotest_common.sh@839 -- # xtrace_disable 00:04:55.380 08:43:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:55.638 [2024-06-09 08:43:17.973102] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:04:55.638 [2024-06-09 08:43:17.973154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1155040 ] 00:04:55.638 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.638 [2024-06-09 08:43:18.023118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.638 [2024-06-09 08:43:18.101831] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.205 08:43:18 dpdk_mem_utility -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:04:56.205 08:43:18 dpdk_mem_utility -- common/autotest_common.sh@863 -- # return 0 00:04:56.205 08:43:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:56.205 08:43:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:56.205 08:43:18 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:04:56.205 08:43:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:56.464 { 00:04:56.464 "filename": "/tmp/spdk_mem_dump.txt" 00:04:56.464 } 00:04:56.464 08:43:18 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:04:56.464 08:43:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:56.464 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:56.464 1 heaps totaling size 814.000000 MiB 00:04:56.464 size: 814.000000 MiB heap id: 0 00:04:56.464 end heaps---------- 00:04:56.464 8 mempools totaling size 598.116089 MiB 00:04:56.464 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:56.464 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:56.464 size: 84.521057 MiB name: bdev_io_1155040 00:04:56.464 size: 51.011292 MiB name: evtpool_1155040 00:04:56.464 size: 50.003479 MiB name: 
msgpool_1155040 00:04:56.464 size: 21.763794 MiB name: PDU_Pool 00:04:56.464 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:56.464 size: 0.026123 MiB name: Session_Pool 00:04:56.464 end mempools------- 00:04:56.464 6 memzones totaling size 4.142822 MiB 00:04:56.464 size: 1.000366 MiB name: RG_ring_0_1155040 00:04:56.464 size: 1.000366 MiB name: RG_ring_1_1155040 00:04:56.464 size: 1.000366 MiB name: RG_ring_4_1155040 00:04:56.464 size: 1.000366 MiB name: RG_ring_5_1155040 00:04:56.464 size: 0.125366 MiB name: RG_ring_2_1155040 00:04:56.464 size: 0.015991 MiB name: RG_ring_3_1155040 00:04:56.464 end memzones------- 00:04:56.464 08:43:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:56.464 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:56.464 list of free elements. size: 12.519348 MiB 00:04:56.464 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:56.464 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:56.464 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:56.464 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:56.464 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:56.464 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:56.464 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:56.464 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:56.465 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:56.465 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:56.465 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:56.465 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:56.465 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:56.465 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:56.465 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:56.465 list of standard malloc elements. 
size: 199.218079 MiB 00:04:56.465 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:56.465 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:56.465 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:56.465 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:56.465 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:56.465 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:56.465 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:56.465 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:56.465 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:56.465 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:56.465 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:56.465 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:56.465 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:56.465 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:56.465 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:56.465 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:56.465 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:56.465 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:56.465 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:56.465 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:56.465 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:56.465 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:56.465 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:56.465 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:56.465 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:56.465 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:56.465 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:56.465 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:56.465 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:56.465 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:56.465 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:56.465 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:56.465 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:56.465 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:56.465 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:56.465 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:56.465 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:56.465 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:56.465 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:56.465 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:56.465 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:56.465 list of memzone associated elements. 
size: 602.262573 MiB 00:04:56.465 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:56.465 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:56.465 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:56.465 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:56.465 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:56.465 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1155040_0 00:04:56.465 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:56.465 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1155040_0 00:04:56.465 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:56.465 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1155040_0 00:04:56.465 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:56.465 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:56.465 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:56.465 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:56.465 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:56.465 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1155040 00:04:56.465 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:56.465 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1155040 00:04:56.465 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:56.465 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1155040 00:04:56.465 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:56.465 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:56.465 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:56.465 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:56.465 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:56.465 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:56.465 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:56.465 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:56.465 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:56.465 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1155040 00:04:56.465 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:56.465 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1155040 00:04:56.465 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:56.465 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1155040 00:04:56.465 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:56.465 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1155040 00:04:56.465 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:56.465 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1155040 00:04:56.465 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:56.465 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:56.465 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:56.465 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:56.465 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:56.465 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:56.465 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:56.465 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1155040 00:04:56.465 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:56.465 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:56.465 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:56.465 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:56.465 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:56.465 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1155040 00:04:56.465 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:56.465 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:56.465 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:56.465 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1155040 00:04:56.465 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:56.465 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1155040 00:04:56.465 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:56.465 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:56.465 08:43:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:56.465 08:43:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1155040 00:04:56.465 08:43:18 dpdk_mem_utility -- common/autotest_common.sh@949 -- # '[' -z 1155040 ']' 00:04:56.465 08:43:18 dpdk_mem_utility -- common/autotest_common.sh@953 -- # kill -0 1155040 00:04:56.465 08:43:18 dpdk_mem_utility -- common/autotest_common.sh@954 -- # uname 00:04:56.465 08:43:18 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:04:56.465 08:43:18 dpdk_mem_utility -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1155040 00:04:56.465 08:43:18 dpdk_mem_utility -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:04:56.465 08:43:18 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:04:56.465 08:43:18 dpdk_mem_utility -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1155040' 00:04:56.466 killing process with pid 1155040 00:04:56.466 08:43:18 dpdk_mem_utility -- common/autotest_common.sh@968 -- # kill 1155040 00:04:56.466 08:43:18 dpdk_mem_utility -- common/autotest_common.sh@973 -- # wait 1155040 00:04:56.730 00:04:56.730 real 0m1.365s 00:04:56.730 user 0m1.453s 00:04:56.730 sys 0m0.358s 00:04:56.730 08:43:19 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:56.730 08:43:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:56.730 ************************************ 00:04:56.730 END TEST dpdk_mem_utility 00:04:56.730 ************************************ 00:04:56.730 08:43:19 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/event.sh 00:04:56.730 08:43:19 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:04:56.730 08:43:19 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:56.730 08:43:19 -- common/autotest_common.sh@10 -- # set +x 00:04:56.730 ************************************ 00:04:56.730 START TEST event 00:04:56.730 ************************************ 00:04:56.730 08:43:19 event -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/event.sh 00:04:57.019 * Looking for test storage... 
00:04:57.019 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event 00:04:57.019 08:43:19 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:57.019 08:43:19 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:57.019 08:43:19 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:57.019 08:43:19 event -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:04:57.019 08:43:19 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:57.019 08:43:19 event -- common/autotest_common.sh@10 -- # set +x 00:04:57.019 ************************************ 00:04:57.019 START TEST event_perf 00:04:57.019 ************************************ 00:04:57.019 08:43:19 event.event_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:57.019 Running I/O for 1 seconds...[2024-06-09 08:43:19.410789] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:04:57.019 [2024-06-09 08:43:19.410854] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1155371 ] 00:04:57.019 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.019 [2024-06-09 08:43:19.470809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:57.019 [2024-06-09 08:43:19.543075] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.019 [2024-06-09 08:43:19.543175] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:04:57.019 [2024-06-09 08:43:19.543265] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:04:57.019 [2024-06-09 08:43:19.543267] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.429 Running I/O for 1 seconds... 00:04:58.429 lcore 0: 212743 00:04:58.429 lcore 1: 212742 00:04:58.429 lcore 2: 212742 00:04:58.429 lcore 3: 212743 00:04:58.429 done. 00:04:58.429 00:04:58.429 real 0m1.226s 00:04:58.429 user 0m4.152s 00:04:58.429 sys 0m0.071s 00:04:58.429 08:43:20 event.event_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:58.429 08:43:20 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:58.429 ************************************ 00:04:58.429 END TEST event_perf 00:04:58.429 ************************************ 00:04:58.429 08:43:20 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:58.429 08:43:20 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:04:58.429 08:43:20 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:58.429 08:43:20 event -- common/autotest_common.sh@10 -- # set +x 00:04:58.429 ************************************ 00:04:58.429 START TEST event_reactor 00:04:58.429 ************************************ 00:04:58.429 08:43:20 event.event_reactor -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:58.429 [2024-06-09 08:43:20.697658] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:04:58.429 [2024-06-09 08:43:20.697731] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1155589 ] 00:04:58.429 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.429 [2024-06-09 08:43:20.756230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.429 [2024-06-09 08:43:20.826287] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.363 test_start 00:04:59.363 oneshot 00:04:59.364 tick 100 00:04:59.364 tick 100 00:04:59.364 tick 250 00:04:59.364 tick 100 00:04:59.364 tick 100 00:04:59.364 tick 250 00:04:59.364 tick 100 00:04:59.364 tick 500 00:04:59.364 tick 100 00:04:59.364 tick 100 00:04:59.364 tick 250 00:04:59.364 tick 100 00:04:59.364 tick 100 00:04:59.364 test_end 00:04:59.364 00:04:59.364 real 0m1.215s 00:04:59.364 user 0m1.137s 00:04:59.364 sys 0m0.075s 00:04:59.364 08:43:21 event.event_reactor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:04:59.364 08:43:21 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:59.364 ************************************ 00:04:59.364 END TEST event_reactor 00:04:59.364 ************************************ 00:04:59.364 08:43:21 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:59.364 08:43:21 event -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:04:59.364 08:43:21 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:04:59.364 08:43:21 event -- common/autotest_common.sh@10 -- # set +x 00:04:59.622 ************************************ 00:04:59.622 START TEST event_reactor_perf 00:04:59.622 ************************************ 00:04:59.622 08:43:21 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:59.622 [2024-06-09 08:43:21.971439] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:04:59.622 [2024-06-09 08:43:21.971503] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1155773 ] 00:04:59.622 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.622 [2024-06-09 08:43:22.028788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.622 [2024-06-09 08:43:22.098588] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.994 test_start 00:05:00.994 test_end 00:05:00.994 Performance: 525828 events per second 00:05:00.994 00:05:00.994 real 0m1.213s 00:05:00.994 user 0m1.142s 00:05:00.994 sys 0m0.068s 00:05:00.994 08:43:23 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:00.994 08:43:23 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:00.994 ************************************ 00:05:00.994 END TEST event_reactor_perf 00:05:00.995 ************************************ 00:05:00.995 08:43:23 event -- event/event.sh@49 -- # uname -s 00:05:00.995 08:43:23 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:00.995 08:43:23 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:00.995 08:43:23 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:00.995 08:43:23 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:00.995 08:43:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.995 ************************************ 00:05:00.995 START TEST event_scheduler 00:05:00.995 ************************************ 00:05:00.995 08:43:23 event.event_scheduler -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:00.995 * Looking for test storage... 00:05:00.995 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/scheduler 00:05:00.995 08:43:23 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:00.995 08:43:23 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1156043 00:05:00.995 08:43:23 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.995 08:43:23 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:00.995 08:43:23 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1156043 00:05:00.995 08:43:23 event.event_scheduler -- common/autotest_common.sh@830 -- # '[' -z 1156043 ']' 00:05:00.995 08:43:23 event.event_scheduler -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.995 08:43:23 event.event_scheduler -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:00.995 08:43:23 event.event_scheduler -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
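The waitforlisten helper entered above blocks until the freshly launched scheduler app answers on its RPC socket. A sketch of its shape, assuming the poll goes through scripts/rpc.py ($rootdir stands for the SPDK checkout; the rpc_get_methods probe and the 0.5 s cadence are illustrative guesses at the helper's internals):

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while ((max_retries--)); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died during startup
            if "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0                             # RPC server is listening
            fi
            sleep 0.5
        done
        return 1
    }

Because the app was started with --wait-for-rpc, only the RPC server is up at this point; the framework itself is initialized later with the framework_start_init RPC, as the trace below shows.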
00:05:00.995 08:43:23 event.event_scheduler -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:00.995 08:43:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:00.995 [2024-06-09 08:43:23.357860] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:00.995 [2024-06-09 08:43:23.357906] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1156043 ] 00:05:00.995 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.995 [2024-06-09 08:43:23.407363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:00.995 [2024-06-09 08:43:23.487688] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.995 [2024-06-09 08:43:23.487776] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.995 [2024-06-09 08:43:23.487880] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:05:00.995 [2024-06-09 08:43:23.487882] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:05:01.929 08:43:24 event.event_scheduler -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:01.929 08:43:24 event.event_scheduler -- common/autotest_common.sh@863 -- # return 0 00:05:01.929 08:43:24 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:01.929 08:43:24 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.929 08:43:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:01.929 POWER: Env isn't set yet! 00:05:01.929 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:01.929 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:01.929 POWER: Cannot set governor of lcore 0 to userspace 00:05:01.929 POWER: Attempting to initialise PSTAT power management... 
00:05:01.929 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:01.929 POWER: Initialized successfully for lcore 0 power management 00:05:01.929 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:01.929 POWER: Initialized successfully for lcore 1 power management 00:05:01.929 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:01.929 POWER: Initialized successfully for lcore 2 power management 00:05:01.929 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:01.929 POWER: Initialized successfully for lcore 3 power management 00:05:01.929 [2024-06-09 08:43:24.198878] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:01.929 [2024-06-09 08:43:24.198889] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:01.929 [2024-06-09 08:43:24.198895] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:01.929 08:43:24 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.929 08:43:24 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:01.929 08:43:24 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.929 08:43:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:01.929 [2024-06-09 08:43:24.266636] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:01.929 08:43:24 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.929 08:43:24 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:01.929 08:43:24 event.event_scheduler -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:01.929 08:43:24 event.event_scheduler -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:01.929 08:43:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:01.929 ************************************ 00:05:01.929 START TEST scheduler_create_thread 00:05:01.929 ************************************ 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # scheduler_create_thread 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.929 2 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.929 3 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.929 4 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.929 5 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.929 6 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.929 7 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.929 8 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.929 9 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.929 10 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:01.929 08:43:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.862 08:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:02.862 08:43:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:02.862 08:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:02.862 08:43:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.236 08:43:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:04.236 08:43:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:04.236 08:43:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:04.236 08:43:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:04.236 08:43:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.170 08:43:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:05.170 00:05:05.170 real 0m3.383s 00:05:05.170 user 0m0.021s 00:05:05.170 sys 0m0.007s 00:05:05.170 08:43:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:05.170 08:43:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.170 ************************************ 00:05:05.170 END TEST scheduler_create_thread 00:05:05.170 ************************************ 00:05:05.170 08:43:27 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:05.170 08:43:27 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1156043 00:05:05.170 08:43:27 event.event_scheduler -- common/autotest_common.sh@949 -- # '[' -z 1156043 ']' 00:05:05.170 08:43:27 event.event_scheduler -- common/autotest_common.sh@953 -- # kill -0 1156043 00:05:05.170 08:43:27 event.event_scheduler -- common/autotest_common.sh@954 -- # uname 
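The scheduler_create_thread test that just finished drives the running app purely through scheduler_plugin RPCs. The same sequence, condensed (rpc_cmd is the harness wrapper bound to scripts/rpc.py -s /var/tmp/spdk.sock earlier in the trace; thread IDs come back on stdout):

    rpc=rpc_cmd
    # one 100%-busy and one idle thread pinned to each core in the 0xF mask
    for mask in 0x1 0x2 0x4 0x8; do
        $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m $mask -a 100
        $rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m $mask -a 0
    done
    # unpinned threads: raise one from idle to 50% active, then create and delete one
    thread_id=$($rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    $rpc --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
    thread_id=$($rpc --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    $rpc --plugin scheduler_plugin scheduler_thread_delete "$thread_id"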
00:05:05.170 08:43:27 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:05.170 08:43:27 event.event_scheduler -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1156043 00:05:05.428 08:43:27 event.event_scheduler -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:05:05.428 08:43:27 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:05:05.428 08:43:27 event.event_scheduler -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1156043' 00:05:05.428 killing process with pid 1156043 00:05:05.428 08:43:27 event.event_scheduler -- common/autotest_common.sh@968 -- # kill 1156043 00:05:05.428 08:43:27 event.event_scheduler -- common/autotest_common.sh@973 -- # wait 1156043 00:05:05.687 [2024-06-09 08:43:28.066554] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:05.687 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:05.687 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:05.687 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:05.687 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:05.687 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:05.687 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:05.687 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:05.687 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:05.945 00:05:05.945 real 0m5.059s 00:05:05.945 user 0m10.479s 00:05:05.945 sys 0m0.343s 00:05:05.945 08:43:28 event.event_scheduler -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:05.945 08:43:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:05.945 ************************************ 00:05:05.945 END TEST event_scheduler 00:05:05.945 ************************************ 00:05:05.945 08:43:28 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:05.945 08:43:28 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:05.945 08:43:28 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:05.945 08:43:28 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:05.945 08:43:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.945 ************************************ 00:05:05.945 START TEST app_repeat 00:05:05.945 ************************************ 00:05:05.945 08:43:28 event.app_repeat -- common/autotest_common.sh@1124 -- # app_repeat_test 00:05:05.945 08:43:28 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.945 08:43:28 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.945 08:43:28 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:05.945 08:43:28 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:05.945 08:43:28 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:05.945 08:43:28 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:05.946 08:43:28 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:05.946 08:43:28 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1156883 00:05:05.946 08:43:28 
event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:05.946 08:43:28 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.946 08:43:28 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1156883' 00:05:05.946 Process app_repeat pid: 1156883 00:05:05.946 08:43:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:05.946 08:43:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:05.946 spdk_app_start Round 0 00:05:05.946 08:43:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1156883 /var/tmp/spdk-nbd.sock 00:05:05.946 08:43:28 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 1156883 ']' 00:05:05.946 08:43:28 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:05.946 08:43:28 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:05.946 08:43:28 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:05.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:05.946 08:43:28 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:05.946 08:43:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:05.946 [2024-06-09 08:43:28.400948] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:05.946 [2024-06-09 08:43:28.401001] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1156883 ] 00:05:05.946 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.946 [2024-06-09 08:43:28.458498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.204 [2024-06-09 08:43:28.532947] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.204 [2024-06-09 08:43:28.532949] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.770 08:43:29 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:06.770 08:43:29 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:06.770 08:43:29 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:07.028 Malloc0 00:05:07.029 08:43:29 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:07.029 Malloc1 00:05:07.029 08:43:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:07.029 08:43:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.029 08:43:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.029 08:43:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:07.029 08:43:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.029 08:43:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:07.029 08:43:29 event.app_repeat -- 
bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:07.029 08:43:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.029 08:43:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.029 08:43:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:07.029 08:43:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.029 08:43:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:07.029 08:43:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:07.029 08:43:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:07.029 08:43:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.029 08:43:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:07.286 /dev/nbd0 00:05:07.286 08:43:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:07.286 08:43:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:07.286 08:43:29 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:05:07.286 08:43:29 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:07.286 08:43:29 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:07.286 08:43:29 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:07.286 08:43:29 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:05:07.286 08:43:29 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:07.286 08:43:29 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:07.286 08:43:29 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:07.286 08:43:29 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:07.287 1+0 records in 00:05:07.287 1+0 records out 00:05:07.287 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000563073 s, 7.3 MB/s 00:05:07.287 08:43:29 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:07.287 08:43:29 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:07.287 08:43:29 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:07.287 08:43:29 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:07.287 08:43:29 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:07.287 08:43:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:07.287 08:43:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.287 08:43:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:07.544 /dev/nbd1 00:05:07.544 08:43:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:07.544 08:43:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:07.544 08:43:29 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:05:07.544 08:43:29 event.app_repeat -- 
common/autotest_common.sh@868 -- # local i 00:05:07.544 08:43:29 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:07.544 08:43:29 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:07.544 08:43:29 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:05:07.544 08:43:29 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:07.544 08:43:29 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:07.544 08:43:29 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:07.544 08:43:29 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:07.544 1+0 records in 00:05:07.544 1+0 records out 00:05:07.544 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000194975 s, 21.0 MB/s 00:05:07.544 08:43:29 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:07.544 08:43:29 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:07.544 08:43:29 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:07.544 08:43:29 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:07.544 08:43:29 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:07.544 08:43:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:07.545 08:43:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.545 08:43:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:07.545 08:43:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.545 08:43:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:07.803 08:43:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:07.803 { 00:05:07.803 "nbd_device": "/dev/nbd0", 00:05:07.803 "bdev_name": "Malloc0" 00:05:07.803 }, 00:05:07.803 { 00:05:07.803 "nbd_device": "/dev/nbd1", 00:05:07.803 "bdev_name": "Malloc1" 00:05:07.803 } 00:05:07.803 ]' 00:05:07.803 08:43:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:07.803 { 00:05:07.803 "nbd_device": "/dev/nbd0", 00:05:07.803 "bdev_name": "Malloc0" 00:05:07.803 }, 00:05:07.803 { 00:05:07.803 "nbd_device": "/dev/nbd1", 00:05:07.803 "bdev_name": "Malloc1" 00:05:07.803 } 00:05:07.803 ]' 00:05:07.803 08:43:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:07.803 08:43:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:07.803 /dev/nbd1' 00:05:07.803 08:43:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:07.803 /dev/nbd1' 00:05:07.803 08:43:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:07.803 08:43:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:07.803 08:43:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:07.803 08:43:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:07.803 08:43:30 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:07.803 08:43:30 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:07.803 08:43:30 event.app_repeat -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.803 08:43:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:07.803 08:43:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:07.803 08:43:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:07.803 08:43:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:07.803 08:43:30 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:07.803 256+0 records in 00:05:07.803 256+0 records out 00:05:07.803 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103572 s, 101 MB/s 00:05:07.804 08:43:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:07.804 08:43:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:07.804 256+0 records in 00:05:07.804 256+0 records out 00:05:07.804 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130689 s, 80.2 MB/s 00:05:07.804 08:43:30 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:07.804 08:43:30 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:07.804 256+0 records in 00:05:07.804 256+0 records out 00:05:07.804 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141977 s, 73.9 MB/s 00:05:07.804 08:43:30 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:07.804 08:43:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.804 08:43:30 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:07.804 08:43:30 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:07.804 08:43:30 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:07.804 08:43:30 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:07.804 08:43:30 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:07.804 08:43:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:07.804 08:43:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:07.804 08:43:30 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:07.804 08:43:30 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:07.804 08:43:30 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:07.804 08:43:30 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:07.804 08:43:30 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.804 08:43:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.804 08:43:30 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:07.804 08:43:30 event.app_repeat -- bdev/nbd_common.sh@51 -- # 
local i 00:05:07.804 08:43:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:07.804 08:43:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:08.061 08:43:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:08.061 08:43:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:08.061 08:43:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:08.061 08:43:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:08.061 08:43:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:08.061 08:43:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:08.061 08:43:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:08.061 08:43:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:08.061 08:43:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:08.061 08:43:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:08.330 08:43:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:08.330 08:43:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:08.330 08:43:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:08.330 08:43:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:08.330 08:43:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:08.330 08:43:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:08.330 08:43:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:08.330 08:43:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:08.330 08:43:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:08.330 08:43:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.330 08:43:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:08.330 08:43:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:08.330 08:43:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:08.330 08:43:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:08.330 08:43:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:08.330 08:43:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:08.330 08:43:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:08.330 08:43:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:08.330 08:43:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:08.330 08:43:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:08.330 08:43:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:08.330 08:43:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:08.330 08:43:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:08.330 08:43:30 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:08.603 08:43:31 event.app_repeat -- event/event.sh@35 -- 
# sleep 3 00:05:08.862 [2024-06-09 08:43:31.278592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:08.862 [2024-06-09 08:43:31.344480] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.862 [2024-06-09 08:43:31.344482] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.862 [2024-06-09 08:43:31.384610] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:08.862 [2024-06-09 08:43:31.384654] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:12.141 08:43:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:12.141 08:43:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:12.141 spdk_app_start Round 1 00:05:12.141 08:43:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1156883 /var/tmp/spdk-nbd.sock 00:05:12.141 08:43:34 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 1156883 ']' 00:05:12.141 08:43:34 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:12.141 08:43:34 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:12.141 08:43:34 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:12.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:12.141 08:43:34 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:12.141 08:43:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:12.141 08:43:34 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:12.141 08:43:34 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:12.141 08:43:34 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.141 Malloc0 00:05:12.141 08:43:34 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.141 Malloc1 00:05:12.141 08:43:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.141 08:43:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.141 08:43:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.141 08:43:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:12.141 08:43:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.141 08:43:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:12.141 08:43:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.141 08:43:34 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.141 08:43:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.141 08:43:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:12.141 08:43:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.141 08:43:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 
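Each app_repeat round repeats the nbd data-verify pattern shown in full during Round 0 above: export Malloc0 and Malloc1 as /dev/nbd0 and /dev/nbd1, push 1 MiB of random data through each device with direct I/O, then compare the device contents back against the source file. Stripped of the harness plumbing, the write/verify core is roughly ($testdir stands for spdk/test/event as in the trace):

    tmp_file=$testdir/nbdrandtest
    dd if=/dev/urandom of=$tmp_file bs=4096 count=256           # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$tmp_file of=$nbd bs=4096 count=256 oflag=direct  # write through the bdev
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M $tmp_file $nbd                             # read back and compare
    done
    rm $tmp_file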
00:05:12.141 08:43:34 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:12.141 08:43:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:12.141 08:43:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.141 08:43:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:12.399 /dev/nbd0 00:05:12.399 08:43:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:12.399 08:43:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:12.399 08:43:34 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:05:12.399 08:43:34 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:12.399 08:43:34 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:12.399 08:43:34 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:12.399 08:43:34 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:05:12.399 08:43:34 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:12.399 08:43:34 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:12.399 08:43:34 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:12.399 08:43:34 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.399 1+0 records in 00:05:12.399 1+0 records out 00:05:12.399 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000168827 s, 24.3 MB/s 00:05:12.399 08:43:34 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:12.399 08:43:34 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:12.399 08:43:34 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:12.399 08:43:34 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:12.399 08:43:34 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:12.399 08:43:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.399 08:43:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.399 08:43:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:12.666 /dev/nbd1 00:05:12.667 08:43:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:12.667 08:43:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:12.667 08:43:35 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:05:12.667 08:43:35 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:12.667 08:43:35 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:12.667 08:43:35 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:12.667 08:43:35 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:05:12.667 08:43:35 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:12.667 08:43:35 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:12.667 08:43:35 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 
00:05:12.667 08:43:35 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.667 1+0 records in 00:05:12.667 1+0 records out 00:05:12.667 4096 bytes (4.1 kB, 4.0 KiB) copied, 9.789e-05 s, 41.8 MB/s 00:05:12.667 08:43:35 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:12.667 08:43:35 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:12.667 08:43:35 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:12.667 08:43:35 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:12.667 08:43:35 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:12.667 08:43:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.667 08:43:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.667 08:43:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.667 08:43:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.667 08:43:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.667 08:43:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:12.667 { 00:05:12.667 "nbd_device": "/dev/nbd0", 00:05:12.668 "bdev_name": "Malloc0" 00:05:12.668 }, 00:05:12.668 { 00:05:12.668 "nbd_device": "/dev/nbd1", 00:05:12.668 "bdev_name": "Malloc1" 00:05:12.668 } 00:05:12.668 ]' 00:05:12.668 08:43:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:12.668 { 00:05:12.668 "nbd_device": "/dev/nbd0", 00:05:12.668 "bdev_name": "Malloc0" 00:05:12.668 }, 00:05:12.668 { 00:05:12.668 "nbd_device": "/dev/nbd1", 00:05:12.668 "bdev_name": "Malloc1" 00:05:12.668 } 00:05:12.668 ]' 00:05:12.668 08:43:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:12.930 /dev/nbd1' 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:12.930 /dev/nbd1' 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:12.930 256+0 records in 00:05:12.930 256+0 records out 00:05:12.930 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010339 s, 101 MB/s 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:12.930 256+0 records in 00:05:12.930 256+0 records out 00:05:12.930 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135088 s, 77.6 MB/s 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:12.930 256+0 records in 00:05:12.930 256+0 records out 00:05:12.930 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142344 s, 73.7 MB/s 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:12.930 08:43:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.931 08:43:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:13.188 08:43:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:13.188 08:43:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:13.188 08:43:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:13.188 08:43:35 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.188 08:43:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.188 08:43:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:13.188 08:43:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:13.188 08:43:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.188 08:43:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.188 08:43:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:13.188 08:43:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:13.188 08:43:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:13.188 08:43:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:13.188 08:43:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.188 08:43:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.188 08:43:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:13.188 08:43:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:13.188 08:43:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.188 08:43:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.188 08:43:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.188 08:43:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.445 08:43:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:13.445 08:43:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:13.445 08:43:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.445 08:43:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:13.445 08:43:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:13.445 08:43:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.445 08:43:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:13.445 08:43:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:13.445 08:43:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:13.445 08:43:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:13.445 08:43:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:13.445 08:43:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:13.446 08:43:35 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:13.703 08:43:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:13.961 [2024-06-09 08:43:36.300105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.961 [2024-06-09 08:43:36.365237] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.961 [2024-06-09 08:43:36.365240] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.961 [2024-06-09 08:43:36.406122] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
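The waitfornbd checks traced in every round poll /proc/partitions until the nbd device shows up, then prove it is actually readable with a single direct-I/O block. A sketch consistent with the traced steps (the retry counts and the nbdtest scratch file match the trace; the sleep interval is an assumption):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # a visible partition entry is not enough: read one block back
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/$nbd_name of=$testdir/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null; then
                local size
                size=$(stat -c %s "$testdir/nbdtest")
                rm -f "$testdir/nbdtest"
                [ "$size" != 0 ] && return 0    # got real data, the device is live
            fi
            sleep 0.1
        done
        return 1
    }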
00:05:13.961 [2024-06-09 08:43:36.406162] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:17.237 08:43:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:17.237 08:43:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:17.237 spdk_app_start Round 2 00:05:17.237 08:43:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1156883 /var/tmp/spdk-nbd.sock 00:05:17.237 08:43:39 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 1156883 ']' 00:05:17.237 08:43:39 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:17.237 08:43:39 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:17.237 08:43:39 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:17.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:17.237 08:43:39 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:17.237 08:43:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:17.237 08:43:39 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:17.237 08:43:39 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:17.237 08:43:39 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.237 Malloc0 00:05:17.237 08:43:39 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.237 Malloc1 00:05:17.237 08:43:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.237 08:43:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.237 08:43:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.237 08:43:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:17.237 08:43:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.237 08:43:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:17.237 08:43:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.237 08:43:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.237 08:43:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.237 08:43:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:17.237 08:43:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.237 08:43:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:17.237 08:43:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:17.237 08:43:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:17.237 08:43:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.237 08:43:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:17.494 /dev/nbd0 00:05:17.494 
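Note: the Malloc0/Malloc1 fixture above is two 64 MB, 4096-byte-block RAM bdevs created over the /var/tmp/spdk-nbd.sock RPC socket and attached to kernel NBD nodes. The waitfornbd trace that follows gates the test on each node being genuinely usable; a sketch reconstructed from the common/autotest_common.sh@867-@888 lines below (the sleeps and the give-up return are assumptions not visible in the xtrace, and /tmp/nbdtest stands in for the workspace nbdtest path):

    waitfornbd() {
        local nbd_name=$1 i
        # First wait for the kernel to publish the device at all ...
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                       # assumed
        done
        # ... then prove it is readable: a single 4 KiB O_DIRECT read must
        # succeed and produce a non-empty output file.
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
                local size
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1                       # assumed
        done
        return 1                            # assumed failure path; never hit here
    }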
08:43:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:17.494 08:43:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:17.494 08:43:39 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd0 00:05:17.494 08:43:39 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:17.494 08:43:39 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:17.494 08:43:39 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:17.494 08:43:39 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd0 /proc/partitions 00:05:17.494 08:43:39 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:17.494 08:43:39 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:17.494 08:43:39 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:17.494 08:43:39 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.494 1+0 records in 00:05:17.494 1+0 records out 00:05:17.494 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00017573 s, 23.3 MB/s 00:05:17.494 08:43:39 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:17.494 08:43:39 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:17.494 08:43:39 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:17.494 08:43:39 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:17.494 08:43:39 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:17.494 08:43:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.494 08:43:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.495 08:43:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:17.495 /dev/nbd1 00:05:17.752 08:43:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:17.752 08:43:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:17.752 08:43:40 event.app_repeat -- common/autotest_common.sh@867 -- # local nbd_name=nbd1 00:05:17.752 08:43:40 event.app_repeat -- common/autotest_common.sh@868 -- # local i 00:05:17.752 08:43:40 event.app_repeat -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:17.752 08:43:40 event.app_repeat -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:17.752 08:43:40 event.app_repeat -- common/autotest_common.sh@871 -- # grep -q -w nbd1 /proc/partitions 00:05:17.752 08:43:40 event.app_repeat -- common/autotest_common.sh@872 -- # break 00:05:17.752 08:43:40 event.app_repeat -- common/autotest_common.sh@883 -- # (( i = 1 )) 00:05:17.752 08:43:40 event.app_repeat -- common/autotest_common.sh@883 -- # (( i <= 20 )) 00:05:17.752 08:43:40 event.app_repeat -- common/autotest_common.sh@884 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.752 1+0 records in 00:05:17.752 1+0 records out 00:05:17.752 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000175539 s, 23.3 MB/s 00:05:17.752 08:43:40 event.app_repeat -- common/autotest_common.sh@885 -- # stat -c %s 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:17.752 08:43:40 event.app_repeat -- common/autotest_common.sh@885 -- # size=4096 00:05:17.752 08:43:40 event.app_repeat -- common/autotest_common.sh@886 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdtest 00:05:17.752 08:43:40 event.app_repeat -- common/autotest_common.sh@887 -- # '[' 4096 '!=' 0 ']' 00:05:17.752 08:43:40 event.app_repeat -- common/autotest_common.sh@888 -- # return 0 00:05:17.752 08:43:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.752 08:43:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.752 08:43:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.752 08:43:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.752 08:43:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.752 08:43:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:17.752 { 00:05:17.752 "nbd_device": "/dev/nbd0", 00:05:17.752 "bdev_name": "Malloc0" 00:05:17.752 }, 00:05:17.752 { 00:05:17.752 "nbd_device": "/dev/nbd1", 00:05:17.752 "bdev_name": "Malloc1" 00:05:17.752 } 00:05:17.752 ]' 00:05:17.752 08:43:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.752 08:43:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:17.752 { 00:05:17.752 "nbd_device": "/dev/nbd0", 00:05:17.752 "bdev_name": "Malloc0" 00:05:17.752 }, 00:05:17.752 { 00:05:17.752 "nbd_device": "/dev/nbd1", 00:05:17.752 "bdev_name": "Malloc1" 00:05:17.752 } 00:05:17.752 ]' 00:05:17.752 08:43:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:17.752 /dev/nbd1' 00:05:17.752 08:43:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.752 08:43:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:17.752 /dev/nbd1' 00:05:17.752 08:43:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:17.752 08:43:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:17.752 08:43:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:17.752 08:43:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:17.752 08:43:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:17.752 08:43:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.752 08:43:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.752 08:43:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:17.752 08:43:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:17.752 08:43:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:17.752 08:43:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:17.752 256+0 records in 00:05:17.752 256+0 records out 00:05:17.752 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100056 s, 105 MB/s 00:05:17.752 08:43:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.752 08:43:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:17.752 256+0 records in 00:05:17.752 256+0 records out 00:05:17.752 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139252 s, 75.3 MB/s 00:05:17.752 08:43:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.752 08:43:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:18.009 256+0 records in 00:05:18.009 256+0 records out 00:05:18.009 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0154697 s, 67.8 MB/s 00:05:18.009 08:43:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:18.009 08:43:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.009 08:43:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.009 08:43:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:18.009 08:43:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:18.009 08:43:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:18.009 08:43:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:18.009 08:43:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.009 08:43:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:18.009 08:43:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.009 08:43:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:18.009 08:43:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/nbdrandtest 00:05:18.009 08:43:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:18.009 08:43:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.009 08:43:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.009 08:43:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:18.009 08:43:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:18.009 08:43:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.009 08:43:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:18.009 08:43:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:18.009 08:43:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:18.009 08:43:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:18.009 08:43:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.009 08:43:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.009 08:43:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:18.009 08:43:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.009 08:43:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 
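Note: the dd/cmp pairs above are one helper doing double duty, first in write mode and then in verify mode. A sketch of nbd_dd_data_verify as it can be read off the bdev/nbd_common.sh@70-@85 trace lines ($TMP stands in for the nbdrandtest path in this workspace; error handling beyond cmp's own exit status is omitted):

    nbd_dd_data_verify() {
        local nbd_list=($1) operation=$2 i
        local tmp_file=$TMP
        if [ "$operation" = write ]; then
            # Lay down 1 MiB of random data, then push it through every
            # NBD device with O_DIRECT so the backing bdev really gets written.
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            # Byte-wise compare of the first 1M of each device against the
            # pattern file; cmp exits nonzero on the first mismatch.
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i"
            done
            rm "$tmp_file"
        fi
    }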
00:05:18.009 08:43:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.009 08:43:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:18.266 08:43:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:18.266 08:43:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:18.266 08:43:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:18.266 08:43:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.266 08:43:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.266 08:43:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:18.266 08:43:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.266 08:43:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.266 08:43:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.266 08:43:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.266 08:43:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.524 08:43:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:18.524 08:43:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:18.524 08:43:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.524 08:43:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:18.524 08:43:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.524 08:43:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:18.524 08:43:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:18.524 08:43:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:18.524 08:43:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:18.524 08:43:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:18.524 08:43:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:18.524 08:43:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:18.524 08:43:40 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:18.782 08:43:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:18.782 [2024-06-09 08:43:41.325012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:19.040 [2024-06-09 08:43:41.391438] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.040 [2024-06-09 08:43:41.391439] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.040 [2024-06-09 08:43:41.431631] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:19.040 [2024-06-09 08:43:41.431670] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
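Note: the count=0 check above is how the test proves both devices are really gone before killing the app. Reconstructed from the bdev/nbd_common.sh@61-@66 trace lines:

    nbd_get_count() {
        local rpc_server=$1
        local nbd_disks_json nbd_disks_name count
        nbd_disks_json=$(rpc.py -s "$rpc_server" nbd_get_disks)
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
        # grep -c exits nonzero when it counts zero matches, hence the
        # bare 'true' seen in the trace right before count=0.
        count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }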
00:05:21.634 08:43:44 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1156883 /var/tmp/spdk-nbd.sock 00:05:21.634 08:43:44 event.app_repeat -- common/autotest_common.sh@830 -- # '[' -z 1156883 ']' 00:05:21.634 08:43:44 event.app_repeat -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:21.634 08:43:44 event.app_repeat -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:21.634 08:43:44 event.app_repeat -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:21.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:21.634 08:43:44 event.app_repeat -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:21.634 08:43:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:21.892 08:43:44 event.app_repeat -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:21.892 08:43:44 event.app_repeat -- common/autotest_common.sh@863 -- # return 0 00:05:21.892 08:43:44 event.app_repeat -- event/event.sh@39 -- # killprocess 1156883 00:05:21.892 08:43:44 event.app_repeat -- common/autotest_common.sh@949 -- # '[' -z 1156883 ']' 00:05:21.892 08:43:44 event.app_repeat -- common/autotest_common.sh@953 -- # kill -0 1156883 00:05:21.892 08:43:44 event.app_repeat -- common/autotest_common.sh@954 -- # uname 00:05:21.892 08:43:44 event.app_repeat -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:21.892 08:43:44 event.app_repeat -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1156883 00:05:21.892 08:43:44 event.app_repeat -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:21.892 08:43:44 event.app_repeat -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:21.892 08:43:44 event.app_repeat -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1156883' 00:05:21.892 killing process with pid 1156883 00:05:21.892 08:43:44 event.app_repeat -- common/autotest_common.sh@968 -- # kill 1156883 00:05:21.892 08:43:44 event.app_repeat -- common/autotest_common.sh@973 -- # wait 1156883 00:05:22.150 spdk_app_start is called in Round 0. 00:05:22.150 Shutdown signal received, stop current app iteration 00:05:22.150 Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 reinitialization... 00:05:22.150 spdk_app_start is called in Round 1. 00:05:22.150 Shutdown signal received, stop current app iteration 00:05:22.150 Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 reinitialization... 00:05:22.150 spdk_app_start is called in Round 2. 00:05:22.150 Shutdown signal received, stop current app iteration 00:05:22.150 Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 reinitialization... 00:05:22.150 spdk_app_start is called in Round 3. 
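Note: killprocess, traced at the common/autotest_common.sh@949-@973 lines above, is the teardown used by every test in this log. A sketch (the sudo comparison is visible in the trace, but its branch body is never taken in this run, so it is omitted rather than guessed at):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1        # @949 guard: a pid must be supplied
        kill -0 "$pid"                   # @953 probe: fails if the pid is already gone
        # @954-@959: on Linux, look up the command name; here it is reactor_0,
        # so the special case for sudo-wrapped processes is not exercised.
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid"
        kill "$pid"                      # default SIGTERM
        wait "$pid"                      # reap it before the next test starts
    }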
00:05:22.150 Shutdown signal received, stop current app iteration 00:05:22.150 08:43:44 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:22.150 08:43:44 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:22.150 00:05:22.150 real 0m16.161s 00:05:22.150 user 0m34.958s 00:05:22.150 sys 0m2.346s 00:05:22.150 08:43:44 event.app_repeat -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:22.150 08:43:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.150 ************************************ 00:05:22.150 END TEST app_repeat 00:05:22.150 ************************************ 00:05:22.150 08:43:44 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:22.150 08:43:44 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:22.150 08:43:44 event -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:22.150 08:43:44 event -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:22.150 08:43:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.150 ************************************ 00:05:22.150 START TEST cpu_locks 00:05:22.150 ************************************ 00:05:22.150 08:43:44 event.cpu_locks -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:22.150 * Looking for test storage... 00:05:22.150 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/event 00:05:22.150 08:43:44 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:22.150 08:43:44 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:22.150 08:43:44 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:22.150 08:43:44 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:22.150 08:43:44 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:22.150 08:43:44 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:22.150 08:43:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.408 ************************************ 00:05:22.408 START TEST default_locks 00:05:22.408 ************************************ 00:05:22.408 08:43:44 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # default_locks 00:05:22.408 08:43:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1159838 00:05:22.408 08:43:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1159838 00:05:22.408 08:43:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:22.408 08:43:44 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 1159838 ']' 00:05:22.408 08:43:44 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.408 08:43:44 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:22.408 08:43:44 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
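Note: the cpu_locks suite starting here revolves around one tiny check, visible in the cpu_locks.sh@22 trace lines below: a live spdk_tgt holds a POSIX file lock whose name contains spdk_cpu_lock for every core it has claimed, and lslocks lists the locks held by a pid. Reconstructed helper:

    locks_exist() {
        # Succeeds iff the target with this pid still holds a core lock.
        # grep -q exits on the first match, so lslocks may get SIGPIPE on
        # its next write -- that is where the harmless 'lslocks: write error'
        # lines scattered through this log come from.
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }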
00:05:22.408 08:43:44 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:22.408 08:43:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.408 [2024-06-09 08:43:44.768378] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:22.408 [2024-06-09 08:43:44.768422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1159838 ] 00:05:22.408 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.408 [2024-06-09 08:43:44.819463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.408 [2024-06-09 08:43:44.896610] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.343 08:43:45 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:23.343 08:43:45 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 0 00:05:23.343 08:43:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1159838 00:05:23.343 08:43:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1159838 00:05:23.343 08:43:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:23.343 lslocks: write error 00:05:23.343 08:43:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1159838 00:05:23.343 08:43:45 event.cpu_locks.default_locks -- common/autotest_common.sh@949 -- # '[' -z 1159838 ']' 00:05:23.343 08:43:45 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # kill -0 1159838 00:05:23.343 08:43:45 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # uname 00:05:23.343 08:43:45 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:23.343 08:43:45 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1159838 00:05:23.343 08:43:45 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:23.343 08:43:45 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:23.343 08:43:45 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1159838' 00:05:23.343 killing process with pid 1159838 00:05:23.343 08:43:45 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # kill 1159838 00:05:23.343 08:43:45 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # wait 1159838 00:05:23.602 08:43:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1159838 00:05:23.602 08:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:05:23.602 08:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 1159838 00:05:23.602 08:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:23.602 08:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:23.602 08:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:23.602 08:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:23.602 08:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- 
# waitforlisten 1159838 00:05:23.602 08:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # '[' -z 1159838 ']' 00:05:23.602 08:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.602 08:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:23.602 08:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.602 08:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:23.602 08:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.602 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (1159838) - No such process 00:05:23.602 ERROR: process (pid: 1159838) is no longer running 00:05:23.602 08:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:23.602 08:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@863 -- # return 1 00:05:23.602 08:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:05:23.602 08:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:23.602 08:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:23.602 08:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:23.602 08:43:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:23.602 08:43:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:23.602 08:43:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:23.602 08:43:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:23.602 00:05:23.602 real 0m1.387s 00:05:23.602 user 0m1.458s 00:05:23.602 sys 0m0.418s 00:05:23.602 08:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:23.602 08:43:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.602 ************************************ 00:05:23.602 END TEST default_locks 00:05:23.602 ************************************ 00:05:23.602 08:43:46 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:23.602 08:43:46 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:23.602 08:43:46 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:23.602 08:43:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.860 ************************************ 00:05:23.860 START TEST default_locks_via_rpc 00:05:23.860 ************************************ 00:05:23.860 08:43:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # default_locks_via_rpc 00:05:23.860 08:43:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1160102 00:05:23.860 08:43:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1160102 00:05:23.860 08:43:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:23.861 08:43:46 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 1160102 ']' 00:05:23.861 08:43:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.861 08:43:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:23.861 08:43:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.861 08:43:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:23.861 08:43:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.861 [2024-06-09 08:43:46.220218] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:23.861 [2024-06-09 08:43:46.220266] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1160102 ] 00:05:23.861 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.861 [2024-06-09 08:43:46.272629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.861 [2024-06-09 08:43:46.339791] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.795 08:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:24.795 08:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:05:24.795 08:43:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:24.795 08:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:24.795 08:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.795 08:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:24.795 08:43:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:24.795 08:43:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:24.795 08:43:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:24.795 08:43:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:24.795 08:43:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:24.795 08:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:24.795 08:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.795 08:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:24.795 08:43:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1160102 00:05:24.795 08:43:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1160102 00:05:24.795 08:43:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:24.795 08:43:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1160102 00:05:24.795 08:43:47 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@949 -- # '[' -z 1160102 ']' 00:05:24.795 08:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # kill -0 1160102 00:05:24.795 08:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # uname 00:05:24.795 08:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:24.795 08:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1160102 00:05:24.795 08:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:24.795 08:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:24.795 08:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1160102' 00:05:24.795 killing process with pid 1160102 00:05:24.795 08:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # kill 1160102 00:05:24.795 08:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # wait 1160102 00:05:25.054 00:05:25.054 real 0m1.366s 00:05:25.054 user 0m1.430s 00:05:25.054 sys 0m0.402s 00:05:25.054 08:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:25.054 08:43:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.054 ************************************ 00:05:25.054 END TEST default_locks_via_rpc 00:05:25.054 ************************************ 00:05:25.054 08:43:47 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:25.054 08:43:47 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:25.054 08:43:47 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:25.054 08:43:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.054 ************************************ 00:05:25.054 START TEST non_locking_app_on_locked_coremask 00:05:25.054 ************************************ 00:05:25.054 08:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # non_locking_app_on_locked_coremask 00:05:25.054 08:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1160355 00:05:25.054 08:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:25.054 08:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1160355 /var/tmp/spdk.sock 00:05:25.054 08:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1160355 ']' 00:05:25.054 08:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.054 08:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:25.054 08:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
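Note on the default_locks_via_rpc run that ends above: rather than the --disable-cpumask-locks startup flag, it toggles the same locks on a live target over RPC. The two calls from the trace, shown standalone (rpc.py stands for the full scripts/rpc.py path, talking to the default /var/tmp/spdk.sock):

    # Drop the core locks at runtime; the no_locks check then confirms
    # nothing named spdk_cpu_lock is held any more ...
    rpc.py framework_disable_cpumask_locks
    # ... and re-acquire them, after which locks_exist succeeds again.
    rpc.py framework_enable_cpumask_locks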
00:05:25.054 08:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:25.054 08:43:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.313 [2024-06-09 08:43:47.655410] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:25.313 [2024-06-09 08:43:47.655452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1160355 ] 00:05:25.313 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.313 [2024-06-09 08:43:47.708862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.313 [2024-06-09 08:43:47.782737] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.247 08:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:26.248 08:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:05:26.248 08:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:26.248 08:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1160584 00:05:26.248 08:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1160584 /var/tmp/spdk2.sock 00:05:26.248 08:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1160584 ']' 00:05:26.248 08:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:26.248 08:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:26.248 08:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:26.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:26.248 08:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:26.248 08:43:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.248 [2024-06-09 08:43:48.493081] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:26.248 [2024-06-09 08:43:48.493126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1160584 ] 00:05:26.248 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.248 [2024-06-09 08:43:48.567869] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
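Note: non_locking_app_on_locked_coremask runs two targets on one core. The 'CPU core locks deactivated' notice just above belongs to the second instance, which opts out of locking so it can share core 0 with the lock holder started first. The two launch lines, reduced to their essentials (spdk_tgt stands for build/bin/spdk_tgt; both run in the background):

    # First instance: claims the core-0 lock, serves the default RPC socket.
    spdk_tgt -m 0x1 &
    # Second instance: same core mask, but skips lock acquisition and uses
    # its own RPC socket so the two targets can coexist.
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &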
00:05:26.248 [2024-06-09 08:43:48.567898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.248 [2024-06-09 08:43:48.707564] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.813 08:43:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:26.813 08:43:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:05:26.813 08:43:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1160355 00:05:26.813 08:43:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1160355 00:05:26.813 08:43:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:27.393 lslocks: write error 00:05:27.393 08:43:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1160355 00:05:27.393 08:43:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 1160355 ']' 00:05:27.393 08:43:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 1160355 00:05:27.393 08:43:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:05:27.393 08:43:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:27.393 08:43:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1160355 00:05:27.393 08:43:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:27.393 08:43:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:27.394 08:43:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1160355' 00:05:27.394 killing process with pid 1160355 00:05:27.394 08:43:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 1160355 00:05:27.394 08:43:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 1160355 00:05:27.959 08:43:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1160584 00:05:27.960 08:43:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 1160584 ']' 00:05:27.960 08:43:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 1160584 00:05:28.218 08:43:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:05:28.218 08:43:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:28.218 08:43:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1160584 00:05:28.218 08:43:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:28.218 08:43:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:28.218 08:43:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1160584' 00:05:28.218 
killing process with pid 1160584 00:05:28.218 08:43:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 1160584 00:05:28.218 08:43:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 1160584 00:05:28.476 00:05:28.476 real 0m3.260s 00:05:28.476 user 0m3.486s 00:05:28.476 sys 0m0.924s 00:05:28.476 08:43:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:28.476 08:43:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.476 ************************************ 00:05:28.476 END TEST non_locking_app_on_locked_coremask 00:05:28.476 ************************************ 00:05:28.476 08:43:50 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:28.476 08:43:50 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:28.476 08:43:50 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:28.476 08:43:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.476 ************************************ 00:05:28.476 START TEST locking_app_on_unlocked_coremask 00:05:28.476 ************************************ 00:05:28.476 08:43:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_unlocked_coremask 00:05:28.476 08:43:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1161072 00:05:28.476 08:43:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1161072 /var/tmp/spdk.sock 00:05:28.476 08:43:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:28.476 08:43:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1161072 ']' 00:05:28.476 08:43:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.476 08:43:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:28.476 08:43:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.476 08:43:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:28.476 08:43:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.476 [2024-06-09 08:43:50.984068] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:28.476 [2024-06-09 08:43:50.984110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1161072 ] 00:05:28.476 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.734 [2024-06-09 08:43:51.037698] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
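Note: locking_app_on_unlocked_coremask, starting here, inverts the previous test: now the first instance is the one that skips locking, leaving core 0 free for the second instance to lock normally (same spdk_tgt shorthand as above):

    # First instance leaves core 0 unlocked ...
    spdk_tgt -m 0x1 --disable-cpumask-locks &
    # ... so the second instance can take the lock on that same core.
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &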
00:05:28.734 [2024-06-09 08:43:51.037722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.734 [2024-06-09 08:43:51.104968] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.301 08:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:29.301 08:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:05:29.301 08:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:29.301 08:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1161112 00:05:29.301 08:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1161112 /var/tmp/spdk2.sock 00:05:29.301 08:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1161112 ']' 00:05:29.301 08:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:29.301 08:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:29.301 08:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:29.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:29.301 08:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:29.301 08:43:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.301 [2024-06-09 08:43:51.803492] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:05:29.301 [2024-06-09 08:43:51.803537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1161112 ] 00:05:29.301 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.559 [2024-06-09 08:43:51.878061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.559 [2024-06-09 08:43:52.022420] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.126 08:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:30.126 08:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@863 -- # return 0 00:05:30.126 08:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1161112 00:05:30.126 08:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1161112 00:05:30.126 08:43:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:30.691 lslocks: write error 00:05:30.691 08:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1161072 00:05:30.691 08:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 1161072 ']' 00:05:30.691 08:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 1161072 00:05:30.691 08:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:05:30.691 08:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:30.691 08:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1161072 00:05:30.948 08:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:30.948 08:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:30.948 08:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1161072' 00:05:30.948 killing process with pid 1161072 00:05:30.948 08:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 1161072 00:05:30.948 08:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 1161072 00:05:31.515 08:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1161112 00:05:31.515 08:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@949 -- # '[' -z 1161112 ']' 00:05:31.515 08:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # kill -0 1161112 00:05:31.515 08:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # uname 00:05:31.515 08:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:31.515 08:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1161112 00:05:31.515 08:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 
00:05:31.515 08:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:31.515 08:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1161112' 00:05:31.515 killing process with pid 1161112 00:05:31.515 08:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # kill 1161112 00:05:31.515 08:43:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # wait 1161112 00:05:31.773 00:05:31.773 real 0m3.290s 00:05:31.773 user 0m3.530s 00:05:31.773 sys 0m0.914s 00:05:31.773 08:43:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:31.773 08:43:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.773 ************************************ 00:05:31.773 END TEST locking_app_on_unlocked_coremask 00:05:31.773 ************************************ 00:05:31.773 08:43:54 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:31.773 08:43:54 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:31.773 08:43:54 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:31.773 08:43:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.773 ************************************ 00:05:31.773 START TEST locking_app_on_locked_coremask 00:05:31.773 ************************************ 00:05:31.773 08:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # locking_app_on_locked_coremask 00:05:31.773 08:43:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.773 08:43:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1161573 00:05:31.773 08:43:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1161573 /var/tmp/spdk.sock 00:05:31.773 08:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1161573 ']' 00:05:31.773 08:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.773 08:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:31.774 08:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.774 08:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:31.774 08:43:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.032 [2024-06-09 08:43:54.332335] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:05:32.032 [2024-06-09 08:43:54.332370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1161573 ] 00:05:32.032 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.032 [2024-06-09 08:43:54.385517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.032 [2024-06-09 08:43:54.463086] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.598 08:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:32.598 08:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 0 00:05:32.598 08:43:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1161796 00:05:32.598 08:43:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1161796 /var/tmp/spdk2.sock 00:05:32.598 08:43:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:32.598 08:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:05:32.598 08:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 1161796 /var/tmp/spdk2.sock 00:05:32.598 08:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:32.598 08:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:32.598 08:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:32.598 08:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:32.598 08:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 1161796 /var/tmp/spdk2.sock 00:05:32.598 08:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # '[' -z 1161796 ']' 00:05:32.598 08:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:32.598 08:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:32.598 08:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:32.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:32.598 08:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:32.598 08:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.856 [2024-06-09 08:43:55.178569] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:05:32.856 [2024-06-09 08:43:55.178616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1161796 ] 00:05:32.856 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.856 [2024-06-09 08:43:55.246535] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1161573 has claimed it. 00:05:32.857 [2024-06-09 08:43:55.246563] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:33.422 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (1161796) - No such process 00:05:33.422 ERROR: process (pid: 1161796) is no longer running 00:05:33.422 08:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:33.422 08:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@863 -- # return 1 00:05:33.422 08:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:05:33.422 08:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:33.422 08:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:33.422 08:43:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:33.422 08:43:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1161573 00:05:33.422 08:43:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1161573 00:05:33.422 08:43:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:33.680 lslocks: write error 00:05:33.680 08:43:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1161573 00:05:33.680 08:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@949 -- # '[' -z 1161573 ']' 00:05:33.680 08:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # kill -0 1161573 00:05:33.680 08:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # uname 00:05:33.680 08:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:33.681 08:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1161573 00:05:33.681 08:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:33.681 08:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:33.681 08:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1161573' 00:05:33.681 killing process with pid 1161573 00:05:33.681 08:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # kill 1161573 00:05:33.681 08:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # wait 1161573 00:05:34.247 00:05:34.247 real 0m2.213s 00:05:34.247 user 0m2.430s 00:05:34.247 sys 0m0.588s 00:05:34.247 08:43:56 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1125 -- # xtrace_disable 00:05:34.247 08:43:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.247 ************************************ 00:05:34.247 END TEST locking_app_on_locked_coremask 00:05:34.247 ************************************ 00:05:34.247 08:43:56 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:34.247 08:43:56 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:34.247 08:43:56 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:34.247 08:43:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.247 ************************************ 00:05:34.247 START TEST locking_overlapped_coremask 00:05:34.247 ************************************ 00:05:34.247 08:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask 00:05:34.247 08:43:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1162049 00:05:34.247 08:43:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1162049 /var/tmp/spdk.sock 00:05:34.247 08:43:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:34.247 08:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 1162049 ']' 00:05:34.247 08:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.247 08:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:34.247 08:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.247 08:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:34.247 08:43:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.247 [2024-06-09 08:43:56.619690] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:05:34.248 [2024-06-09 08:43:56.619744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1162049 ] 00:05:34.248 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.248 [2024-06-09 08:43:56.674022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:34.248 [2024-06-09 08:43:56.743353] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.248 [2024-06-09 08:43:56.743452] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.248 [2024-06-09 08:43:56.743452] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.182 08:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:35.182 08:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 0 00:05:35.182 08:43:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1162223 00:05:35.182 08:43:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1162223 /var/tmp/spdk2.sock 00:05:35.182 08:43:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:35.182 08:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:05:35.182 08:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 1162223 /var/tmp/spdk2.sock 00:05:35.182 08:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:05:35.182 08:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:35.182 08:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:05:35.182 08:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:35.182 08:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 1162223 /var/tmp/spdk2.sock 00:05:35.182 08:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # '[' -z 1162223 ']' 00:05:35.182 08:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:35.182 08:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:35.182 08:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:35.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:35.182 08:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:35.182 08:43:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.182 [2024-06-09 08:43:57.478847] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:05:35.182 [2024-06-09 08:43:57.478896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1162223 ] 00:05:35.182 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.182 [2024-06-09 08:43:57.554715] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1162049 has claimed it. 00:05:35.182 [2024-06-09 08:43:57.554757] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:35.748 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh: line 845: kill: (1162223) - No such process 00:05:35.748 ERROR: process (pid: 1162223) is no longer running 00:05:35.748 08:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:35.748 08:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@863 -- # return 1 00:05:35.748 08:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:05:35.748 08:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:35.748 08:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:35.748 08:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:35.748 08:43:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:35.748 08:43:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:35.749 08:43:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:35.749 08:43:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:35.749 08:43:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1162049 00:05:35.749 08:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@949 -- # '[' -z 1162049 ']' 00:05:35.749 08:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # kill -0 1162049 00:05:35.749 08:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # uname 00:05:35.749 08:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:35.749 08:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1162049 00:05:35.749 08:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:35.749 08:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:35.749 08:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1162049' 00:05:35.749 killing process with pid 1162049 00:05:35.749 08:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # kill 
1162049 00:05:35.749 08:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # wait 1162049 00:05:36.007 00:05:36.007 real 0m1.887s 00:05:36.007 user 0m5.356s 00:05:36.007 sys 0m0.406s 00:05:36.007 08:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:36.007 08:43:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.007 ************************************ 00:05:36.007 END TEST locking_overlapped_coremask 00:05:36.007 ************************************ 00:05:36.007 08:43:58 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:36.007 08:43:58 event.cpu_locks -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:36.007 08:43:58 event.cpu_locks -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:36.007 08:43:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.007 ************************************ 00:05:36.007 START TEST locking_overlapped_coremask_via_rpc 00:05:36.007 ************************************ 00:05:36.007 08:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # locking_overlapped_coremask_via_rpc 00:05:36.007 08:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1162328 00:05:36.007 08:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1162328 /var/tmp/spdk.sock 00:05:36.007 08:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:36.007 08:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 1162328 ']' 00:05:36.007 08:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.007 08:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:36.007 08:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.007 08:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:36.007 08:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.265 [2024-06-09 08:43:58.571996] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:36.265 [2024-06-09 08:43:58.572039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1162328 ] 00:05:36.265 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.265 [2024-06-09 08:43:58.622438] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
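A note on the flag traced above: --disable-cpumask-locks starts spdk_tgt without claiming the per-core lock files (hence the "CPU core locks deactivated" notice), so the test can claim them later over JSON-RPC. A minimal sketch of that flow outside the harness, assuming an SPDK checkout (the 0x7 mask and RPC name are taken from this run; paths are illustrative):

  # start the target with core-lock claiming deferred
  ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  # once the app is listening, claim a lock file for every core in the mask
  ./scripts/rpc.py framework_enable_cpumask_locks
  # on success /var/tmp/spdk_cpu_lock_000 .. _002 exist, one per core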
00:05:36.265 [2024-06-09 08:43:58.622462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:36.265 [2024-06-09 08:43:58.694874] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.265 [2024-06-09 08:43:58.694972] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.265 [2024-06-09 08:43:58.694975] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.521 08:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:36.521 08:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:05:36.521 08:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1162540 00:05:36.521 08:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1162540 /var/tmp/spdk2.sock 00:05:36.521 08:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:36.521 08:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 1162540 ']' 00:05:36.521 08:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:36.521 08:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:36.521 08:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:36.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:36.521 08:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:36.521 08:43:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.521 [2024-06-09 08:43:58.940890] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:36.521 [2024-06-09 08:43:58.940934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1162540 ] 00:05:36.521 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.521 [2024-06-09 08:43:59.015087] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:36.521 [2024-06-09 08:43:59.015118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:36.778 [2024-06-09 08:43:59.160443] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:05:36.778 [2024-06-09 08:43:59.163770] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.778 [2024-06-09 08:43:59.163771] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:05:37.342 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:37.342 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:05:37.342 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:37.342 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:37.342 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.342 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:37.342 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:37.342 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:05:37.342 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:37.342 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:05:37.342 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:37.342 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:05:37.342 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:37.342 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:37.342 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:37.342 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.342 [2024-06-09 08:43:59.760791] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1162328 has claimed it. 
00:05:37.342 request: 00:05:37.342 { 00:05:37.342 "method": "framework_enable_cpumask_locks", 00:05:37.342 "req_id": 1 00:05:37.342 } 00:05:37.342 Got JSON-RPC error response 00:05:37.342 response: 00:05:37.342 { 00:05:37.342 "code": -32603, 00:05:37.342 "message": "Failed to claim CPU core: 2" 00:05:37.342 } 00:05:37.342 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:05:37.342 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:05:37.342 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:37.342 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:37.342 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:37.342 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1162328 /var/tmp/spdk.sock 00:05:37.342 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 1162328 ']' 00:05:37.342 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.342 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:37.342 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.342 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:37.342 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.599 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:37.599 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:05:37.599 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1162540 /var/tmp/spdk2.sock 00:05:37.599 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # '[' -z 1162540 ']' 00:05:37.599 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:37.599 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:37.599 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:37.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
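The JSON-RPC exchange above is the negative case the test wants: the second target (mask 0x1c) overlaps the first (mask 0x7) on core 2, so the claim fails with -32603. Roughly what the test's rpc_cmd wrapper sends against the second socket (socket path and error are from the log; the rpc.py invocation is a sketch):

  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # expected failure: {"code": -32603, "message": "Failed to claim CPU core: 2"}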
00:05:37.599 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:37.599 08:43:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.599 08:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:37.599 08:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@863 -- # return 0 00:05:37.599 08:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:37.599 08:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:37.599 08:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:37.599 08:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:37.599 00:05:37.599 real 0m1.618s 00:05:37.599 user 0m0.746s 00:05:37.599 sys 0m0.131s 00:05:37.599 08:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:37.599 08:44:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.599 ************************************ 00:05:37.599 END TEST locking_overlapped_coremask_via_rpc 00:05:37.599 ************************************ 00:05:37.856 08:44:00 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:37.856 08:44:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1162328 ]] 00:05:37.856 08:44:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1162328 00:05:37.856 08:44:00 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 1162328 ']' 00:05:37.856 08:44:00 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 1162328 00:05:37.856 08:44:00 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:05:37.856 08:44:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:37.856 08:44:00 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1162328 00:05:37.856 08:44:00 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:37.856 08:44:00 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:37.856 08:44:00 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1162328' 00:05:37.856 killing process with pid 1162328 00:05:37.856 08:44:00 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 1162328 00:05:37.856 08:44:00 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 1162328 00:05:38.113 08:44:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1162540 ]] 00:05:38.113 08:44:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1162540 00:05:38.113 08:44:00 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 1162540 ']' 00:05:38.113 08:44:00 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 1162540 00:05:38.113 08:44:00 event.cpu_locks -- common/autotest_common.sh@954 -- # uname 00:05:38.113 08:44:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' 
Linux = Linux ']' 00:05:38.113 08:44:00 event.cpu_locks -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1162540 00:05:38.113 08:44:00 event.cpu_locks -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:05:38.113 08:44:00 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:05:38.113 08:44:00 event.cpu_locks -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1162540' 00:05:38.113 killing process with pid 1162540 00:05:38.113 08:44:00 event.cpu_locks -- common/autotest_common.sh@968 -- # kill 1162540 00:05:38.113 08:44:00 event.cpu_locks -- common/autotest_common.sh@973 -- # wait 1162540 00:05:38.370 08:44:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:38.370 08:44:00 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:38.370 08:44:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1162328 ]] 00:05:38.370 08:44:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1162328 00:05:38.370 08:44:00 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 1162328 ']' 00:05:38.370 08:44:00 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 1162328 00:05:38.370 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1162328) - No such process 00:05:38.370 08:44:00 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 1162328 is not found' 00:05:38.370 Process with pid 1162328 is not found 00:05:38.370 08:44:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1162540 ]] 00:05:38.370 08:44:00 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1162540 00:05:38.370 08:44:00 event.cpu_locks -- common/autotest_common.sh@949 -- # '[' -z 1162540 ']' 00:05:38.370 08:44:00 event.cpu_locks -- common/autotest_common.sh@953 -- # kill -0 1162540 00:05:38.370 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1162540) - No such process 00:05:38.370 08:44:00 event.cpu_locks -- common/autotest_common.sh@976 -- # echo 'Process with pid 1162540 is not found' 00:05:38.370 Process with pid 1162540 is not found 00:05:38.370 08:44:00 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:38.370 00:05:38.370 real 0m16.305s 00:05:38.370 user 0m27.518s 00:05:38.370 sys 0m4.658s 00:05:38.370 08:44:00 event.cpu_locks -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:38.370 08:44:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.370 ************************************ 00:05:38.370 END TEST cpu_locks 00:05:38.370 ************************************ 00:05:38.628 00:05:38.628 real 0m41.660s 00:05:38.628 user 1m19.577s 00:05:38.628 sys 0m7.881s 00:05:38.628 08:44:00 event -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:38.628 08:44:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:38.628 ************************************ 00:05:38.628 END TEST event 00:05:38.628 ************************************ 00:05:38.628 08:44:00 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/thread/thread.sh 00:05:38.628 08:44:00 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:38.628 08:44:00 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:38.628 08:44:00 -- common/autotest_common.sh@10 -- # set +x 00:05:38.628 ************************************ 00:05:38.628 START TEST thread 00:05:38.628 ************************************ 00:05:38.628 08:44:00 thread -- 
common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/thread/thread.sh 00:05:38.628 * Looking for test storage... 00:05:38.628 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/thread 00:05:38.628 08:44:01 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:38.628 08:44:01 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:05:38.628 08:44:01 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:38.628 08:44:01 thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.628 ************************************ 00:05:38.628 START TEST thread_poller_perf 00:05:38.628 ************************************ 00:05:38.628 08:44:01 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:38.628 [2024-06-09 08:44:01.111416] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:38.628 [2024-06-09 08:44:01.111481] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1162884 ] 00:05:38.628 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.628 [2024-06-09 08:44:01.168558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.885 [2024-06-09 08:44:01.240972] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.885 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:39.816 ====================================== 00:05:39.816 busy:2107263876 (cyc) 00:05:39.816 total_run_count: 424000 00:05:39.816 tsc_hz: 2100000000 (cyc) 00:05:39.816 ====================================== 00:05:39.816 poller_cost: 4969 (cyc), 2366 (nsec) 00:05:39.816 00:05:39.816 real 0m1.224s 00:05:39.816 user 0m1.147s 00:05:39.816 sys 0m0.073s 00:05:39.816 08:44:02 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:39.816 08:44:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:39.816 ************************************ 00:05:39.816 END TEST thread_poller_perf 00:05:39.817 ************************************ 00:05:39.817 08:44:02 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:39.817 08:44:02 thread -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:05:39.817 08:44:02 thread -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:39.817 08:44:02 thread -- common/autotest_common.sh@10 -- # set +x 00:05:40.074 ************************************ 00:05:40.074 START TEST thread_poller_perf 00:05:40.074 ************************************ 00:05:40.074 08:44:02 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:40.074 [2024-06-09 08:44:02.405446] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:05:40.074 [2024-06-09 08:44:02.405510] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1163128 ] 00:05:40.074 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.074 [2024-06-09 08:44:02.465120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.074 [2024-06-09 08:44:02.534143] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.074 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:41.067 ====================================== 00:05:41.067 busy:2101297272 (cyc) 00:05:41.067 total_run_count: 5427000 00:05:41.067 tsc_hz: 2100000000 (cyc) 00:05:41.067 ====================================== 00:05:41.067 poller_cost: 387 (cyc), 184 (nsec) 00:05:41.067 00:05:41.067 real 0m1.221s 00:05:41.067 user 0m1.135s 00:05:41.067 sys 0m0.082s 00:05:41.067 08:44:03 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:41.067 08:44:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:41.067 ************************************ 00:05:41.067 END TEST thread_poller_perf 00:05:41.067 ************************************ 00:05:41.352 08:44:03 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:41.352 00:05:41.352 real 0m2.648s 00:05:41.352 user 0m2.361s 00:05:41.352 sys 0m0.290s 00:05:41.352 08:44:03 thread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:41.352 08:44:03 thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.352 ************************************ 00:05:41.352 END TEST thread 00:05:41.352 ************************************ 00:05:41.352 08:44:03 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/accel.sh 00:05:41.352 08:44:03 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:05:41.352 08:44:03 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:41.352 08:44:03 -- common/autotest_common.sh@10 -- # set +x 00:05:41.352 ************************************ 00:05:41.352 START TEST accel 00:05:41.352 ************************************ 00:05:41.352 08:44:03 accel -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/accel.sh 00:05:41.352 * Looking for test storage... 
00:05:41.352 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel 00:05:41.352 08:44:03 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:41.352 08:44:03 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:41.352 08:44:03 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:41.352 08:44:03 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1163415 00:05:41.352 08:44:03 accel -- accel/accel.sh@63 -- # waitforlisten 1163415 00:05:41.352 08:44:03 accel -- common/autotest_common.sh@830 -- # '[' -z 1163415 ']' 00:05:41.352 08:44:03 accel -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.352 08:44:03 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:41.352 08:44:03 accel -- common/autotest_common.sh@835 -- # local max_retries=100 00:05:41.352 08:44:03 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:41.352 08:44:03 accel -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.352 08:44:03 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.352 08:44:03 accel -- common/autotest_common.sh@839 -- # xtrace_disable 00:05:41.352 08:44:03 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.352 08:44:03 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.352 08:44:03 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.352 08:44:03 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.352 08:44:03 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.352 08:44:03 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:41.352 08:44:03 accel -- accel/accel.sh@41 -- # jq -r . 00:05:41.352 [2024-06-09 08:44:03.837960] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:41.352 [2024-06-09 08:44:03.838006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1163415 ] 00:05:41.352 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.352 [2024-06-09 08:44:03.892288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.611 [2024-06-09 08:44:03.966861] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.177 08:44:04 accel -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:05:42.177 08:44:04 accel -- common/autotest_common.sh@863 -- # return 0 00:05:42.177 08:44:04 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:42.177 08:44:04 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:42.177 08:44:04 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:42.177 08:44:04 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:42.177 08:44:04 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:42.177 08:44:04 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:42.177 08:44:04 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:42.177 08:44:04 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.177 08:44:04 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:42.177 08:44:04 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:42.177 08:44:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.177 08:44:04 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.177 08:44:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.177 08:44:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.177 08:44:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.177 08:44:04 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.177 08:44:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.177 08:44:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.177 08:44:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.177 08:44:04 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.177 08:44:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.177 08:44:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.177 08:44:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.177 08:44:04 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.177 08:44:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.177 08:44:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.177 08:44:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.177 08:44:04 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.177 08:44:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.177 08:44:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.177 08:44:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.177 08:44:04 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.177 08:44:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.177 08:44:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.177 08:44:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.177 08:44:04 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.177 08:44:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.177 08:44:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.177 08:44:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.177 08:44:04 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.177 08:44:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.177 08:44:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.177 08:44:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.177 08:44:04 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.177 08:44:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.177 08:44:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.177 08:44:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.177 08:44:04 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.177 08:44:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.177 08:44:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.177 08:44:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.177 08:44:04 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.177 08:44:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.177 08:44:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.177 08:44:04 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:05:42.177 08:44:04 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.177 08:44:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.177 08:44:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.177 08:44:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.177 08:44:04 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.177 08:44:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.178 08:44:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.178 08:44:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.178 08:44:04 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.178 08:44:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.178 08:44:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.178 08:44:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.178 08:44:04 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.178 08:44:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.178 08:44:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.178 08:44:04 accel -- accel/accel.sh@75 -- # killprocess 1163415 00:05:42.178 08:44:04 accel -- common/autotest_common.sh@949 -- # '[' -z 1163415 ']' 00:05:42.178 08:44:04 accel -- common/autotest_common.sh@953 -- # kill -0 1163415 00:05:42.178 08:44:04 accel -- common/autotest_common.sh@954 -- # uname 00:05:42.178 08:44:04 accel -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:05:42.178 08:44:04 accel -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1163415 00:05:42.178 08:44:04 accel -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:05:42.178 08:44:04 accel -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:05:42.178 08:44:04 accel -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1163415' 00:05:42.178 killing process with pid 1163415 00:05:42.178 08:44:04 accel -- common/autotest_common.sh@968 -- # kill 1163415 00:05:42.178 08:44:04 accel -- common/autotest_common.sh@973 -- # wait 1163415 00:05:42.744 08:44:05 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:42.744 08:44:05 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:42.744 08:44:05 accel -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:05:42.744 08:44:05 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:42.744 08:44:05 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.744 08:44:05 accel.accel_help -- common/autotest_common.sh@1124 -- # accel_perf -h 00:05:42.744 08:44:05 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:42.744 08:44:05 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:42.744 08:44:05 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.744 08:44:05 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.744 08:44:05 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.744 08:44:05 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.744 08:44:05 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.744 08:44:05 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:42.744 08:44:05 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:42.744 08:44:05 accel.accel_help -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:42.744 08:44:05 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:42.744 08:44:05 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:42.744 08:44:05 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:05:42.744 08:44:05 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:42.744 08:44:05 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.744 ************************************ 00:05:42.744 START TEST accel_missing_filename 00:05:42.744 ************************************ 00:05:42.744 08:44:05 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress 00:05:42.744 08:44:05 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:05:42.744 08:44:05 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:42.744 08:44:05 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:05:42.744 08:44:05 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:42.744 08:44:05 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:05:42.744 08:44:05 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:42.744 08:44:05 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:05:42.744 08:44:05 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:42.744 08:44:05 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:42.744 08:44:05 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.744 08:44:05 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.744 08:44:05 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.744 08:44:05 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.744 08:44:05 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.744 08:44:05 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:42.744 08:44:05 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:42.744 [2024-06-09 08:44:05.175270] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:42.744 [2024-06-09 08:44:05.175322] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1163683 ] 00:05:42.744 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.744 [2024-06-09 08:44:05.231614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.003 [2024-06-09 08:44:05.304347] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.003 [2024-06-09 08:44:05.344646] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:43.003 [2024-06-09 08:44:05.403589] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:43.003 A filename is required. 
00:05:43.003 08:44:05 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:05:43.003 08:44:05 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:43.003 08:44:05 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:05:43.003 08:44:05 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:05:43.003 08:44:05 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:05:43.003 08:44:05 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:43.003 00:05:43.003 real 0m0.326s 00:05:43.003 user 0m0.247s 00:05:43.003 sys 0m0.114s 00:05:43.003 08:44:05 accel.accel_missing_filename -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:43.003 08:44:05 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:43.003 ************************************ 00:05:43.003 END TEST accel_missing_filename 00:05:43.003 ************************************ 00:05:43.003 08:44:05 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y 00:05:43.003 08:44:05 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:05:43.003 08:44:05 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:43.003 08:44:05 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.003 ************************************ 00:05:43.003 START TEST accel_compress_verify 00:05:43.003 ************************************ 00:05:43.003 08:44:05 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y 00:05:43.003 08:44:05 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:05:43.003 08:44:05 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y 00:05:43.003 08:44:05 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:05:43.003 08:44:05 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:43.003 08:44:05 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:05:43.003 08:44:05 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:43.003 08:44:05 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y 00:05:43.003 08:44:05 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y 00:05:43.003 08:44:05 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:43.003 08:44:05 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.003 08:44:05 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.003 08:44:05 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.003 08:44:05 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.003 08:44:05 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.003 
08:44:05 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:43.003 08:44:05 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:43.003 [2024-06-09 08:44:05.559266] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:43.003 [2024-06-09 08:44:05.559313] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1163831 ] 00:05:43.259 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.259 [2024-06-09 08:44:05.614058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.259 [2024-06-09 08:44:05.685661] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.259 [2024-06-09 08:44:05.726143] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:43.259 [2024-06-09 08:44:05.785810] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:05:43.516 00:05:43.516 Compression does not support the verify option, aborting. 00:05:43.516 08:44:05 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:05:43.516 08:44:05 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:43.516 08:44:05 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:05:43.516 08:44:05 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:05:43.516 08:44:05 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:05:43.516 08:44:05 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:43.516 00:05:43.516 real 0m0.325s 00:05:43.516 user 0m0.245s 00:05:43.516 sys 0m0.116s 00:05:43.516 08:44:05 accel.accel_compress_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:43.516 08:44:05 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:43.516 ************************************ 00:05:43.516 END TEST accel_compress_verify 00:05:43.516 ************************************ 00:05:43.516 08:44:05 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:43.516 08:44:05 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:05:43.516 08:44:05 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:43.516 08:44:05 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.516 ************************************ 00:05:43.516 START TEST accel_wrong_workload 00:05:43.516 ************************************ 00:05:43.516 08:44:05 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w foobar 00:05:43.516 08:44:05 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:05:43.516 08:44:05 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:43.516 08:44:05 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:05:43.516 08:44:05 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:43.516 08:44:05 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:05:43.516 08:44:05 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:43.516 08:44:05 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 
00:05:43.516 08:44:05 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:43.516 08:44:05 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:43.516 08:44:05 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.516 08:44:05 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.516 08:44:05 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.516 08:44:05 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.516 08:44:05 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.516 08:44:05 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:43.516 08:44:05 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:43.516 Unsupported workload type: foobar 00:05:43.516 [2024-06-09 08:44:05.945136] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:43.516 accel_perf options: 00:05:43.516 [-h help message] 00:05:43.516 [-q queue depth per core] 00:05:43.516 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:43.516 [-T number of threads per core 00:05:43.516 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:43.516 [-t time in seconds] 00:05:43.516 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:43.516 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:43.516 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:43.516 [-l for compress/decompress workloads, name of uncompressed input file 00:05:43.516 [-S for crc32c workload, use this seed value (default 0) 00:05:43.516 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:43.516 [-f for fill workload, use this BYTE value (default 255) 00:05:43.516 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:43.516 [-y verify result if this switch is on] 00:05:43.516 [-a tasks to allocate per core (default: same value as -q)] 00:05:43.516 Can be used to spread operations across a wider range of memory. 
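The foobar run is rejected inside spdk_app_parse_args before any work is queued, which is exactly the failure the NOT wrapper is asserting. For contrast, a sketch of an invocation this parser would accept, using only flags from the usage text above and a workload the suite itself exercises below (binary path as used by this job):

/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -q 32 -t 1 -w crc32c -S 32 -y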
00:05:43.516 08:44:05 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:05:43.516 08:44:05 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:43.516 08:44:05 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:43.517 08:44:05 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:43.517 00:05:43.517 real 0m0.033s 00:05:43.517 user 0m0.023s 00:05:43.517 sys 0m0.010s 00:05:43.517 08:44:05 accel.accel_wrong_workload -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:43.517 08:44:05 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:43.517 ************************************ 00:05:43.517 END TEST accel_wrong_workload 00:05:43.517 ************************************ 00:05:43.517 Error: writing output failed: Broken pipe 00:05:43.517 08:44:05 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:43.517 08:44:05 accel -- common/autotest_common.sh@1100 -- # '[' 10 -le 1 ']' 00:05:43.517 08:44:05 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:43.517 08:44:05 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.517 ************************************ 00:05:43.517 START TEST accel_negative_buffers 00:05:43.517 ************************************ 00:05:43.517 08:44:06 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:43.517 08:44:06 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:05:43.517 08:44:06 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:43.517 08:44:06 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:05:43.517 08:44:06 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:43.517 08:44:06 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t accel_perf 00:05:43.517 08:44:06 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:43.517 08:44:06 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:05:43.517 08:44:06 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:43.517 08:44:06 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:43.517 08:44:06 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.517 08:44:06 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.517 08:44:06 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.517 08:44:06 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.517 08:44:06 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.517 08:44:06 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:43.517 08:44:06 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:43.517 -x option must be non-negative. 
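accel_negative_buffers drives the same parser into the other argument check: "-x -1" is refused because the xor source-buffer count may not be negative, and the usage dump below lists 2 as the minimum. A sketch of the smallest variant the check should accept (same command with only the -x value changed; the path is the one this job uses):

/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2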
00:05:43.517 [2024-06-09 08:44:06.047947] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:43.517 accel_perf options: 00:05:43.517 [-h help message] 00:05:43.517 [-q queue depth per core] 00:05:43.517 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:43.517 [-T number of threads per core 00:05:43.517 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:43.517 [-t time in seconds] 00:05:43.517 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:43.517 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:43.517 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:43.517 [-l for compress/decompress workloads, name of uncompressed input file 00:05:43.517 [-S for crc32c workload, use this seed value (default 0) 00:05:43.517 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:43.517 [-f for fill workload, use this BYTE value (default 255) 00:05:43.517 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:43.517 [-y verify result if this switch is on] 00:05:43.517 [-a tasks to allocate per core (default: same value as -q)] 00:05:43.517 Can be used to spread operations across a wider range of memory. 00:05:43.517 08:44:06 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:05:43.517 08:44:06 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:43.517 08:44:06 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:43.517 08:44:06 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:43.517 00:05:43.517 real 0m0.036s 00:05:43.517 user 0m0.023s 00:05:43.517 sys 0m0.013s 00:05:43.517 08:44:06 accel.accel_negative_buffers -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:43.517 08:44:06 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:43.517 ************************************ 00:05:43.517 END TEST accel_negative_buffers 00:05:43.517 ************************************ 00:05:43.517 Error: writing output failed: Broken pipe 00:05:43.775 08:44:06 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:43.775 08:44:06 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:05:43.775 08:44:06 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:43.775 08:44:06 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.775 ************************************ 00:05:43.775 START TEST accel_crc32c 00:05:43.775 ************************************ 00:05:43.775 08:44:06 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:43.775 08:44:06 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:43.775 08:44:06 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:43.775 08:44:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.775 08:44:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.775 08:44:06 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:43.775 08:44:06 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:05:43.775 08:44:06 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:43.775 08:44:06 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.775 08:44:06 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.775 08:44:06 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.775 08:44:06 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.775 08:44:06 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.775 08:44:06 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:43.775 08:44:06 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:43.775 [2024-06-09 08:44:06.148853] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:43.775 [2024-06-09 08:44:06.148918] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1163987 ] 00:05:43.775 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.775 [2024-06-09 08:44:06.209876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.775 [2024-06-09 08:44:06.290149] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.775 08:44:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:43.775 08:44:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.775 08:44:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.775 08:44:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:43.775 08:44:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:43.775 08:44:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:43.775 08:44:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:43.775 08:44:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.033 08:44:06 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.033 08:44:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.966 08:44:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.966 08:44:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.966 08:44:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.966 08:44:07 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.966 08:44:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.966 08:44:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.966 08:44:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.966 08:44:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.966 08:44:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.966 08:44:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.966 08:44:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.966 08:44:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.966 08:44:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.966 08:44:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.966 08:44:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.966 08:44:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.966 08:44:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.966 08:44:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.966 08:44:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.966 08:44:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.966 08:44:07 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.966 08:44:07 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.966 08:44:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.966 08:44:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.966 08:44:07 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:44.966 08:44:07 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:44.966 08:44:07 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.966 00:05:44.966 real 0m1.346s 00:05:44.967 user 0m1.234s 00:05:44.967 sys 0m0.125s 00:05:44.967 08:44:07 accel.accel_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:44.967 08:44:07 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:44.967 ************************************ 00:05:44.967 END TEST accel_crc32c 00:05:44.967 ************************************ 00:05:44.967 08:44:07 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:44.967 08:44:07 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:05:44.967 08:44:07 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:44.967 08:44:07 accel -- common/autotest_common.sh@10 -- # set +x 00:05:45.225 ************************************ 00:05:45.225 START TEST accel_crc32c_C2 00:05:45.225 ************************************ 00:05:45.225 08:44:07 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:45.225 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:45.225 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:45.225 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.225 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.225 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:45.225 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:45.225 08:44:07 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:45.225 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:45.225 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:45.225 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.225 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.225 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:45.225 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:45.225 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:45.225 [2024-06-09 08:44:07.550949] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:45.225 [2024-06-09 08:44:07.551001] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1164231 ] 00:05:45.225 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.225 [2024-06-09 08:44:07.605101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.225 [2024-06-09 08:44:07.675056] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.226 08:44:07 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.601 08:44:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.601 
08:44:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.601 08:44:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.601 08:44:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.601 08:44:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.601 08:44:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.601 08:44:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.601 08:44:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.601 08:44:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.601 08:44:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.601 08:44:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.601 08:44:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.601 08:44:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.601 08:44:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.601 08:44:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.601 08:44:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.601 08:44:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.601 08:44:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.601 08:44:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.601 08:44:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.601 08:44:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.601 08:44:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.601 08:44:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.601 08:44:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.601 08:44:08 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:46.601 08:44:08 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:46.601 08:44:08 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.601 00:05:46.601 real 0m1.327s 00:05:46.601 user 0m1.227s 00:05:46.601 sys 0m0.114s 00:05:46.601 08:44:08 accel.accel_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:46.601 08:44:08 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:46.601 ************************************ 00:05:46.601 END TEST accel_crc32c_C2 00:05:46.601 ************************************ 00:05:46.601 08:44:08 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:46.601 08:44:08 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:05:46.601 08:44:08 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:46.601 08:44:08 accel -- common/autotest_common.sh@10 -- # set +x 00:05:46.601 ************************************ 00:05:46.601 START TEST accel_copy 00:05:46.601 ************************************ 00:05:46.601 08:44:08 accel.accel_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy -y 00:05:46.601 08:44:08 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:46.601 08:44:08 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:46.601 08:44:08 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:46.601 08:44:08 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.601 08:44:08 accel.accel_copy -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:46.601 08:44:08 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.601 08:44:08 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:46.601 08:44:08 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:46.601 08:44:08 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:46.601 08:44:08 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.601 08:44:08 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.601 08:44:08 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:46.601 08:44:08 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:46.601 08:44:08 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:46.601 [2024-06-09 08:44:08.915209] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:46.601 [2024-06-09 08:44:08.915245] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1164476 ] 00:05:46.601 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.601 [2024-06-09 08:44:08.968576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.601 [2024-06-09 08:44:09.038691] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.601 08:44:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:46.601 08:44:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.601 08:44:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.601 08:44:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.601 08:44:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:46.601 08:44:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.601 08:44:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.601 08:44:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
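The interleaved IFS=: / read -r var val / case "$var" lines in this trace are accel.sh replaying the per-run settings visible in the surrounding val= assignments (copy, '4096 bytes', software, 32, 1, '1 seconds', Yes). A rough sketch of that reader mechanic, with the variable names assumed for illustration rather than taken from the verbatim accel.sh:

while IFS=: read -r var val; do
    case "$var" in
        opc)    accel_opc=$val ;;      # e.g. copy, crc32c, fill
        module) accel_module=$val ;;   # e.g. software
    esac
done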
00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:46.602 08:44:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.975 08:44:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:47.975 08:44:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:47.975 08:44:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.975 08:44:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.975 08:44:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:47.975 08:44:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:47.975 08:44:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.975 08:44:10 accel.accel_copy -- 
accel/accel.sh@19 -- # read -r var val 00:05:47.975 08:44:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:47.975 08:44:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:47.975 08:44:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.975 08:44:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.975 08:44:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:47.975 08:44:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:47.975 08:44:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.975 08:44:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.975 08:44:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:47.975 08:44:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:47.975 08:44:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.975 08:44:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.975 08:44:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:47.975 08:44:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:47.975 08:44:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.975 08:44:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.975 08:44:10 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:47.975 08:44:10 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:47.975 08:44:10 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:47.975 00:05:47.975 real 0m1.314s 00:05:47.975 user 0m1.221s 00:05:47.975 sys 0m0.105s 00:05:47.975 08:44:10 accel.accel_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:47.975 08:44:10 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:47.975 ************************************ 00:05:47.975 END TEST accel_copy 00:05:47.975 ************************************ 00:05:47.975 08:44:10 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:47.975 08:44:10 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:05:47.975 08:44:10 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:47.975 08:44:10 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.975 ************************************ 00:05:47.975 START TEST accel_fill 00:05:47.975 ************************************ 00:05:47.975 08:44:10 accel.accel_fill -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:47.975 08:44:10 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:47.975 08:44:10 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:47.975 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.975 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.975 08:44:10 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:47.975 08:44:10 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:47.975 08:44:10 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:47.975 08:44:10 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.975 08:44:10 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.975 08:44:10 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.975 08:44:10 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 
00:05:47.975 08:44:10 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.975 08:44:10 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:47.975 08:44:10 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:47.975 [2024-06-09 08:44:10.301666] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:47.975 [2024-06-09 08:44:10.301711] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1164728 ] 00:05:47.975 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.975 [2024-06-09 08:44:10.356834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.975 [2024-06-09 08:44:10.427778] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.975 08:44:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:47.975 08:44:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:47.975 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.975 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.975 08:44:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:47.975 08:44:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:47.975 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.975 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.975 08:44:10 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:47.975 08:44:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:47.975 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.975 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.975 08:44:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@21 -- # case 
"$var" in 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:47.976 08:44:10 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.351 08:44:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.351 08:44:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.351 08:44:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.351 08:44:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.351 08:44:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.351 08:44:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.351 08:44:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.351 08:44:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.351 08:44:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.351 08:44:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.351 08:44:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.351 08:44:11 
accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.351 08:44:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.351 08:44:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.351 08:44:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.351 08:44:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.351 08:44:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.351 08:44:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.351 08:44:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.351 08:44:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.351 08:44:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.351 08:44:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.351 08:44:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.351 08:44:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.351 08:44:11 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:49.351 08:44:11 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:49.351 08:44:11 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:49.351 00:05:49.351 real 0m1.328s 00:05:49.351 user 0m1.238s 00:05:49.351 sys 0m0.105s 00:05:49.351 08:44:11 accel.accel_fill -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:49.351 08:44:11 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:49.351 ************************************ 00:05:49.351 END TEST accel_fill 00:05:49.351 ************************************ 00:05:49.351 08:44:11 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:49.351 08:44:11 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:05:49.351 08:44:11 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:49.351 08:44:11 accel -- common/autotest_common.sh@10 -- # set +x 00:05:49.351 ************************************ 00:05:49.351 START TEST accel_copy_crc32c 00:05:49.351 ************************************ 00:05:49.351 08:44:11 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y 00:05:49.351 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:49.351 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:49.351 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.351 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.351 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:49.351 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@41 
-- # jq -r . 00:05:49.352 [2024-06-09 08:44:11.688435] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:49.352 [2024-06-09 08:44:11.688479] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1164970 ] 00:05:49.352 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.352 [2024-06-09 08:44:11.742381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.352 [2024-06-09 08:44:11.812465] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:49.352 08:44:11 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:49.352 08:44:11 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:50.725 08:44:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:50.725 08:44:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:50.725 08:44:12 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:05:50.725 08:44:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:50.725 08:44:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:50.725 08:44:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:50.725 08:44:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:50.725 08:44:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:50.725 08:44:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:50.725 08:44:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:50.725 08:44:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:50.725 08:44:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:50.725 08:44:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:50.725 08:44:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:50.725 08:44:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:50.725 08:44:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:50.725 08:44:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:50.725 08:44:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:50.725 08:44:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:50.725 08:44:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:50.725 08:44:12 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:50.725 08:44:12 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:50.725 08:44:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:50.725 08:44:12 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:50.725 08:44:12 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:50.725 08:44:12 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:50.725 08:44:12 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:50.725 00:05:50.725 real 0m1.325s 00:05:50.725 user 0m1.216s 00:05:50.725 sys 0m0.123s 00:05:50.725 08:44:12 accel.accel_copy_crc32c -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:50.725 08:44:12 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:50.725 ************************************ 00:05:50.725 END TEST accel_copy_crc32c 00:05:50.725 ************************************ 00:05:50.725 08:44:13 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:50.725 08:44:13 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:05:50.725 08:44:13 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:50.725 08:44:13 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.725 ************************************ 00:05:50.725 START TEST accel_copy_crc32c_C2 00:05:50.725 ************************************ 00:05:50.725 08:44:13 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:50.725 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:50.725 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:50.725 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.725 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.725 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
copy_crc32c -y -C 2 00:05:50.725 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:50.725 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:50.725 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.725 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.725 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.725 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.725 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.725 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:50.725 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:50.725 [2024-06-09 08:44:13.077354] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:50.725 [2024-06-09 08:44:13.077408] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1165218 ] 00:05:50.725 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.725 [2024-06-09 08:44:13.132305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.725 [2024-06-09 08:44:13.200801] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.725 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.725 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.725 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.725 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.725 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.725 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.725 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.725 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.725 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:50.725 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.725 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.725 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.725 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.725 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.725 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.725 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.725 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.725 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.725 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:50.726 08:44:13 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:50.726 08:44:13 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.099 08:44:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.099 08:44:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.099 08:44:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.099 08:44:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.099 08:44:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.099 08:44:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.099 08:44:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.099 08:44:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.099 08:44:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.099 08:44:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.099 08:44:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.099 08:44:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.099 08:44:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.100 08:44:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.100 08:44:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.100 08:44:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.100 08:44:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.100 08:44:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.100 08:44:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.100 08:44:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.100 08:44:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.100 08:44:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.100 08:44:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.100 08:44:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.100 08:44:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:52.100 08:44:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:52.100 08:44:14 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:52.100 00:05:52.100 real 0m1.327s 00:05:52.100 user 0m1.229s 00:05:52.100 sys 0m0.112s 00:05:52.100 08:44:14 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:52.100 08:44:14 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- 
# set +x 00:05:52.100 ************************************ 00:05:52.100 END TEST accel_copy_crc32c_C2 00:05:52.100 ************************************ 00:05:52.100 08:44:14 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:52.100 08:44:14 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:05:52.100 08:44:14 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:52.100 08:44:14 accel -- common/autotest_common.sh@10 -- # set +x 00:05:52.100 ************************************ 00:05:52.100 START TEST accel_dualcast 00:05:52.100 ************************************ 00:05:52.100 08:44:14 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dualcast -y 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:52.100 [2024-06-09 08:44:14.467421] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
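(Editor's aside; the indented commands below are not part of the captured log.) The two copy_crc32c cases above, the second chained across two buffers via "-C 2", and the dualcast case starting here all follow the same shape: run_test prints the START banner, accel_test launches build/examples/accel_perf with a JSON accel config fed on fd 62 ("-c /dev/fd/62"), and the xtrace "val=" lines are the accel.sh harness reading back the options it set (the opcode, the '4096 bytes' buffer size, the '1 seconds' run time, and so on). A minimal re-run by hand, assuming a built SPDK tree at the workspace path the log uses (SPDK_DIR is shorthand introduced here, not a variable the harness itself defines):

    # path taken from the accel_perf invocations in the log above
    SPDK_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w copy_crc32c -y        # single-buffer copy+CRC-32C
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w copy_crc32c -y -C 2  # chained across 2 buffers, as in the _C2 test
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w dualcast -y          # one 4096-byte source, two destinations

Omitting the "-c /dev/fd/62" config should leave the default software module selected, which is what the "[[ -n software ]]" assertions at the end of each case check.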
00:05:52.100 [2024-06-09 08:44:14.467487] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1165466 ] 00:05:52.100 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.100 [2024-06-09 08:44:14.523803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.100 [2024-06-09 08:44:14.593286] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:52.100 
08:44:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:52.100 08:44:14 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.474 08:44:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:53.474 08:44:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.474 08:44:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.474 08:44:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.474 08:44:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:53.474 08:44:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.474 08:44:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.474 08:44:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.474 08:44:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:53.474 08:44:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.474 08:44:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.474 08:44:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.474 08:44:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:53.474 08:44:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.474 08:44:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.474 08:44:15 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:05:53.474 08:44:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:53.474 08:44:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.474 08:44:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.474 08:44:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.474 08:44:15 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:53.474 08:44:15 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:53.474 08:44:15 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:53.474 08:44:15 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:53.474 08:44:15 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:53.474 08:44:15 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:53.474 08:44:15 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.474 00:05:53.474 real 0m1.331s 00:05:53.474 user 0m1.224s 00:05:53.474 sys 0m0.121s 00:05:53.474 08:44:15 accel.accel_dualcast -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:53.474 08:44:15 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:53.474 ************************************ 00:05:53.474 END TEST accel_dualcast 00:05:53.474 ************************************ 00:05:53.474 08:44:15 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:53.474 08:44:15 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:05:53.474 08:44:15 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:53.474 08:44:15 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.474 ************************************ 00:05:53.474 START TEST accel_compare 00:05:53.474 ************************************ 00:05:53.474 08:44:15 accel.accel_compare -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compare -y 00:05:53.474 08:44:15 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:53.474 08:44:15 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:53.474 08:44:15 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.474 08:44:15 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.474 08:44:15 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:53.474 08:44:15 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:53.474 08:44:15 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:53.474 08:44:15 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.474 08:44:15 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.474 08:44:15 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.474 08:44:15 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.474 08:44:15 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.474 08:44:15 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:53.474 08:44:15 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:53.474 [2024-06-09 08:44:15.857916] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
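(Editor's aside.) The compare case starting here is the simplest of these workloads: a 4096-byte memory comparison with no output transform, so it sets only the opcode and buffer size before the usual software-module and timing values. Under the same assumptions as the first sketch:

    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w compare -y

Each finished case also prints shell timings (e.g. "real 0m1.331s" for dualcast just above): the 1-second workload from "-t 1" plus roughly 0.3 s of SPDK app start-up and teardown.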
00:05:53.474 [2024-06-09 08:44:15.857975] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1165710 ] 00:05:53.474 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.474 [2024-06-09 08:44:15.914395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.474 [2024-06-09 08:44:15.984167] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.474 08:44:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:53.474 08:44:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.474 08:44:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.474 08:44:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.474 08:44:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:53.474 08:44:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.474 08:44:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.474 08:44:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.474 08:44:16 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:53.474 08:44:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.474 08:44:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.474 08:44:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.474 08:44:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:53.474 08:44:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.474 08:44:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.474 08:44:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.474 08:44:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:53.474 08:44:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.474 08:44:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.474 08:44:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.474 08:44:16 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:53.474 08:44:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.474 08:44:16 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:53.474 08:44:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.474 08:44:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.474 08:44:16 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:53.474 08:44:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.474 08:44:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.474 08:44:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.474 08:44:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:53.732 08:44:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.732 08:44:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.732 08:44:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.732 08:44:16 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:53.732 08:44:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.732 08:44:16 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:53.732 08:44:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.732 08:44:16 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:05:53.732 08:44:16 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:53.732 08:44:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.732 08:44:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.732 08:44:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.732 08:44:16 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:53.732 08:44:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.732 08:44:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.732 08:44:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.732 08:44:16 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:53.732 08:44:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.732 08:44:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.732 08:44:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.732 08:44:16 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:53.732 08:44:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.732 08:44:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.732 08:44:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.733 08:44:16 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:53.733 08:44:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.733 08:44:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.733 08:44:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.733 08:44:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:53.733 08:44:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.733 08:44:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.733 08:44:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:53.733 08:44:16 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:53.733 08:44:16 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:53.733 08:44:16 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:53.733 08:44:16 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:54.667 08:44:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:54.667 08:44:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:54.667 08:44:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:54.667 08:44:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:54.667 08:44:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:54.667 08:44:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:54.667 08:44:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:54.667 08:44:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:54.667 08:44:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:54.667 08:44:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:54.667 08:44:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:54.667 08:44:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:54.667 08:44:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:54.667 08:44:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:54.667 08:44:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:54.667 08:44:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:54.667 08:44:17 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:05:54.667 08:44:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:54.667 08:44:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:54.667 08:44:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:54.667 08:44:17 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:54.667 08:44:17 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:54.667 08:44:17 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:54.667 08:44:17 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:54.667 08:44:17 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:54.667 08:44:17 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:54.667 08:44:17 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:54.667 00:05:54.667 real 0m1.331s 00:05:54.667 user 0m1.228s 00:05:54.667 sys 0m0.115s 00:05:54.667 08:44:17 accel.accel_compare -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:54.667 08:44:17 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:54.667 ************************************ 00:05:54.667 END TEST accel_compare 00:05:54.667 ************************************ 00:05:54.667 08:44:17 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:54.667 08:44:17 accel -- common/autotest_common.sh@1100 -- # '[' 7 -le 1 ']' 00:05:54.667 08:44:17 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:54.667 08:44:17 accel -- common/autotest_common.sh@10 -- # set +x 00:05:54.667 ************************************ 00:05:54.667 START TEST accel_xor 00:05:54.667 ************************************ 00:05:54.667 08:44:17 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y 00:05:54.667 08:44:17 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:54.667 08:44:17 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:54.667 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.667 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.667 08:44:17 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:54.926 [2024-06-09 08:44:17.248819] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
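(Editor's aside.) The xor case starting here XORs multiple source buffers into one destination; the "val=2" recorded below is the source-buffer count, which this first pass leaves at its default of two. By hand, under the same assumptions as the first sketch:

    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w xor -y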
00:05:54.926 [2024-06-09 08:44:17.248863] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1165954 ] 00:05:54.926 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.926 [2024-06-09 08:44:17.303754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.926 [2024-06-09 08:44:17.373615] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:54.926 08:44:17 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.300 08:44:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.300 08:44:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.300 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.300 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.300 08:44:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.300 08:44:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.300 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.300 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.300 08:44:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.300 08:44:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.300 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.300 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.300 08:44:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.300 08:44:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.300 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.300 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.300 08:44:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.300 
08:44:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.301 00:05:56.301 real 0m1.329s 00:05:56.301 user 0m1.230s 00:05:56.301 sys 0m0.113s 00:05:56.301 08:44:18 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:56.301 08:44:18 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:56.301 ************************************ 00:05:56.301 END TEST accel_xor 00:05:56.301 ************************************ 00:05:56.301 08:44:18 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:56.301 08:44:18 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:05:56.301 08:44:18 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:56.301 08:44:18 accel -- common/autotest_common.sh@10 -- # set +x 00:05:56.301 ************************************ 00:05:56.301 START TEST accel_xor 00:05:56.301 ************************************ 00:05:56.301 08:44:18 accel.accel_xor -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w xor -y -x 3 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:56.301 [2024-06-09 08:44:18.642613] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
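(Editor's aside.) The rerun starting here is the same xor workload with the source count raised to three ("-x 3", recorded as "val=3" below). Note the banner reads "START TEST accel_xor" again: run_test takes the banner name from its first argument, so both variants report under the same name even though they are registered separately (accel.sh@109 vs accel.sh@110). A hypothetical stand-in, not SPDK's actual run_test from autotest_common.sh, that mimics the observable banner and timing behaviour:

    run_test_sketch() {                 # hypothetical; reproduces the banners and real/user/sys lines seen here
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                        # the wrapped command, timed as in the log
        echo "END TEST $name"
    }
    run_test_sketch accel_xor "$SPDK_DIR/build/examples/accel_perf" -t 1 -w xor -y -x 3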
00:05:56.301 [2024-06-09 08:44:18.642665] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1166205 ] 00:05:56.301 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.301 [2024-06-09 08:44:18.699795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.301 [2024-06-09 08:44:18.769685] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:56.301 08:44:18 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.676 08:44:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:57.676 08:44:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.676 08:44:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.676 08:44:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.676 08:44:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:57.676 08:44:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.676 08:44:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.676 08:44:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.676 08:44:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:57.676 08:44:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.676 08:44:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.676 08:44:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.676 08:44:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:57.676 08:44:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.676 08:44:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.676 08:44:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.676 08:44:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:57.676 
08:44:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.676 08:44:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.676 08:44:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.676 08:44:19 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:57.676 08:44:19 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.676 08:44:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.676 08:44:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.676 08:44:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:57.676 08:44:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:57.676 08:44:19 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.676 00:05:57.676 real 0m1.331s 00:05:57.676 user 0m1.231s 00:05:57.676 sys 0m0.114s 00:05:57.676 08:44:19 accel.accel_xor -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:57.676 08:44:19 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:57.676 ************************************ 00:05:57.676 END TEST accel_xor 00:05:57.676 ************************************ 00:05:57.676 08:44:19 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:57.676 08:44:19 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:05:57.676 08:44:19 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:57.676 08:44:19 accel -- common/autotest_common.sh@10 -- # set +x 00:05:57.676 ************************************ 00:05:57.676 START TEST accel_dif_verify 00:05:57.676 ************************************ 00:05:57.676 08:44:20 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_verify 00:05:57.676 08:44:20 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:57.676 08:44:20 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:57.676 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.676 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.676 08:44:20 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:57.676 08:44:20 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:57.676 08:44:20 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:57.676 08:44:20 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:57.676 08:44:20 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:57.676 08:44:20 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.676 08:44:20 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.676 08:44:20 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:57.676 08:44:20 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:57.676 08:44:20 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:57.676 [2024-06-09 08:44:20.034639] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
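(Editor's aside.) The dif_verify case starting here checks T10 Data Integrity Field metadata rather than transforming data: the values set below treat each '4096 bytes' buffer as 512-byte blocks, each block carrying '8 bytes' of DIF (guard CRC, application tag, and reference tag), and the verify-output flag is "val=No" because there is no result buffer to compare. Under the same assumptions as before (the harness passes no -y for this workload):

    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w dif_verify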
00:05:57.677 [2024-06-09 08:44:20.034713] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1166447 ] 00:05:57.677 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.677 [2024-06-09 08:44:20.090538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.677 [2024-06-09 08:44:20.160791] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.677 
08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:57.677 08:44:20 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.052 08:44:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:59.052 
08:44:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.052 08:44:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.052 08:44:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.052 08:44:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:59.052 08:44:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.052 08:44:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.052 08:44:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.052 08:44:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:59.052 08:44:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.052 08:44:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.052 08:44:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.052 08:44:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:59.052 08:44:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.052 08:44:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.052 08:44:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.052 08:44:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:59.052 08:44:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.052 08:44:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.052 08:44:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.052 08:44:21 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:59.052 08:44:21 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:59.052 08:44:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:59.052 08:44:21 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:59.052 08:44:21 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:59.052 08:44:21 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:59.052 08:44:21 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.052 00:05:59.052 real 0m1.329s 00:05:59.052 user 0m1.222s 00:05:59.052 sys 0m0.114s 00:05:59.052 08:44:21 accel.accel_dif_verify -- common/autotest_common.sh@1125 -- # xtrace_disable 00:05:59.052 08:44:21 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:59.052 ************************************ 00:05:59.052 END TEST accel_dif_verify 00:05:59.052 ************************************ 00:05:59.052 08:44:21 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:59.052 08:44:21 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:05:59.052 08:44:21 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:05:59.052 08:44:21 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.052 ************************************ 00:05:59.052 START TEST accel_dif_generate 00:05:59.052 ************************************ 00:05:59.052 08:44:21 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.052 
08:44:21 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:59.052 [2024-06-09 08:44:21.416014] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:05:59.052 [2024-06-09 08:44:21.416060] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1166697 ] 00:05:59.052 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.052 [2024-06-09 08:44:21.470408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.052 [2024-06-09 08:44:21.540413] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.052 08:44:21 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.052 08:44:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:59.053 08:44:21 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:00.427 08:44:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:00.427 08:44:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:00.427 08:44:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:00.427 08:44:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:00.427 08:44:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:00.427 08:44:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:00.427 08:44:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:00.427 08:44:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:00.427 08:44:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:00.427 08:44:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:00.427 08:44:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:00.427 08:44:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:00.427 08:44:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:00.427 08:44:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:00.427 08:44:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:00.427 08:44:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:00.427 08:44:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:00.427 08:44:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:00.427 08:44:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:00.427 08:44:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:00.427 08:44:22 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:00.427 08:44:22 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:00.427 08:44:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:00.427 08:44:22 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:00.427 08:44:22 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:00.427 08:44:22 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:00.427 08:44:22 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.427 00:06:00.427 real 0m1.323s 00:06:00.427 user 0m1.222s 00:06:00.427 sys 
0m0.107s 00:06:00.427 08:44:22 accel.accel_dif_generate -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:00.428 08:44:22 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:00.428 ************************************ 00:06:00.428 END TEST accel_dif_generate 00:06:00.428 ************************************ 00:06:00.428 08:44:22 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:00.428 08:44:22 accel -- common/autotest_common.sh@1100 -- # '[' 6 -le 1 ']' 00:06:00.428 08:44:22 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:00.428 08:44:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.428 ************************************ 00:06:00.428 START TEST accel_dif_generate_copy 00:06:00.428 ************************************ 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w dif_generate_copy 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:00.428 [2024-06-09 08:44:22.785348] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:06:00.428 [2024-06-09 08:44:22.785392] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1166946 ] 00:06:00.428 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.428 [2024-06-09 08:44:22.838319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.428 [2024-06-09 08:44:22.908353] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.428 08:44:22 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:00.428 08:44:22 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.809 08:44:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:01.809 08:44:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.809 08:44:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:01.809 08:44:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.809 08:44:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:01.809 08:44:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.809 08:44:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.809 08:44:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.809 08:44:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:01.809 08:44:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.809 08:44:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.809 08:44:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.809 08:44:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:01.809 08:44:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.809 08:44:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.809 08:44:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.809 08:44:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:01.809 08:44:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.809 08:44:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.809 08:44:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.809 08:44:24 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:01.809 08:44:24 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:01.809 08:44:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:01.809 08:44:24 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:01.809 08:44:24 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:01.809 08:44:24 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:01.809 08:44:24 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.809 00:06:01.809 real 0m1.312s 00:06:01.809 user 0m1.215s 00:06:01.809 sys 0m0.103s 00:06:01.809 08:44:24 accel.accel_dif_generate_copy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:01.809 08:44:24 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:01.809 ************************************ 00:06:01.809 END TEST accel_dif_generate_copy 00:06:01.809 ************************************ 00:06:01.809 08:44:24 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:01.809 08:44:24 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 00:06:01.809 08:44:24 accel -- common/autotest_common.sh@1100 -- # '[' 8 -le 1 ']' 00:06:01.809 08:44:24 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:01.809 08:44:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:01.809 ************************************ 00:06:01.809 START TEST accel_comp 00:06:01.809 ************************************ 00:06:01.809 08:44:24 accel.accel_comp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 00:06:01.809 08:44:24 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:01.809 08:44:24 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:06:01.809 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.809 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.809 08:44:24 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 00:06:01.809 08:44:24 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 00:06:01.809 08:44:24 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:01.809 08:44:24 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:01.809 08:44:24 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:01.809 08:44:24 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.809 08:44:24 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.809 08:44:24 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:01.809 08:44:24 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:01.809 08:44:24 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:01.809 [2024-06-09 08:44:24.162835] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:01.809 [2024-06-09 08:44:24.162899] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1167190 ] 00:06:01.809 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.809 [2024-06-09 08:44:24.218460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.809 [2024-06-09 08:44:24.288601] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.809 08:44:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:01.809 08:44:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.809 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.809 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.809 08:44:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:01.809 08:44:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.809 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.810 
08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:01.810 08:44:24 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:01.810 08:44:24 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:03.184 08:44:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:03.184 08:44:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.184 08:44:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:03.184 08:44:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:03.184 08:44:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:03.184 08:44:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.184 08:44:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:03.184 08:44:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:03.184 08:44:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:03.184 08:44:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.184 08:44:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:03.184 08:44:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:03.184 08:44:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:03.184 08:44:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.184 08:44:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:03.184 08:44:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:03.184 08:44:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:03.184 08:44:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.184 08:44:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:03.184 08:44:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:03.184 08:44:25 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:03.184 08:44:25 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.184 08:44:25 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:03.184 08:44:25 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:03.184 08:44:25 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:03.184 08:44:25 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:03.184 08:44:25 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.184 00:06:03.184 real 0m1.325s 00:06:03.184 user 0m1.216s 00:06:03.184 sys 0m0.112s 00:06:03.184 08:44:25 accel.accel_comp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:03.184 08:44:25 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:03.184 ************************************ 00:06:03.184 END TEST accel_comp 00:06:03.184 ************************************ 00:06:03.184 08:44:25 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y 00:06:03.184 08:44:25 accel -- common/autotest_common.sh@1100 -- # '[' 9 -le 1 ']' 00:06:03.184 08:44:25 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:03.184 08:44:25 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.184 ************************************ 00:06:03.184 START TEST accel_decomp 00:06:03.184 ************************************ 00:06:03.184 08:44:25 
accel.accel_decomp -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:03.184 [2024-06-09 08:44:25.541116] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:03.184 [2024-06-09 08:44:25.541161] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1167439 ] 00:06:03.184 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.184 [2024-06-09 08:44:25.594403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.184 [2024-06-09 08:44:25.664403] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:03.184 08:44:25 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.184 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.185 08:44:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:03.185 08:44:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.185 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.185 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.185 08:44:25 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:03.185 08:44:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.185 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.185 08:44:25 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.185 08:44:25 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:03.185 08:44:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.185 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.185 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.185 08:44:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:03.185 08:44:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.185 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.185 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:03.185 08:44:25 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:03.185 08:44:25 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:03.185 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:03.185 08:44:25 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:04.618 08:44:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:04.618 08:44:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.618 08:44:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:04.618 08:44:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:04.618 08:44:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:04.618 08:44:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.618 08:44:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:04.618 08:44:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:04.618 08:44:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:04.618 08:44:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.618 08:44:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:04.618 08:44:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:04.618 08:44:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:04.618 08:44:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.618 08:44:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:04.618 08:44:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:04.618 08:44:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:04.618 08:44:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.618 08:44:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:04.618 08:44:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:04.618 08:44:26 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:04.618 08:44:26 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:04.618 08:44:26 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:04.618 08:44:26 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:04.618 08:44:26 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:04.618 08:44:26 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:04.618 08:44:26 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.618 00:06:04.618 real 0m1.319s 00:06:04.618 user 0m1.214s 00:06:04.618 sys 0m0.107s 00:06:04.618 08:44:26 accel.accel_decomp -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:04.618 08:44:26 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:04.618 ************************************ 00:06:04.618 END TEST accel_decomp 00:06:04.618 ************************************ 00:06:04.618 
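Each run_test accel_<opcode> block above reduces to a single accel_perf invocation, and the harness echoes the exact command line (accel/accel.sh@12) before walking its expected key/value pairs through the read -r var val loop. A minimal sketch of replaying those runs by hand, assuming the SPDK build tree at the path shown in the log and substituting a bare '{"subsystems":[]}' for the accel JSON config that build_accel_config normally hands over on fd 62 (the /dev/fd/62 seen above):

SPDK_ROOT=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk    # build tree path taken from the log
for w in xor dif_verify dif_generate dif_generate_copy; do
    # -t 1 runs the workload for one second ('1 seconds' in the harness expectations above);
    # process substitution stands in for the harness-built config on /dev/fd/62
    "$SPDK_ROOT/build/examples/accel_perf" -c <(echo '{"subsystems":[]}') -t 1 -w "$w"
done
# the compression cases additionally point -l at the test corpus, exactly as in the log;
# the decompress run adds -y, and accel_decomp_full (below) also passes -o 0
"$SPDK_ROOT/build/examples/accel_perf" -c <(echo '{"subsystems":[]}') -t 1 -w compress -l "$SPDK_ROOT/test/accel/bib"
"$SPDK_ROOT/build/examples/accel_perf" -c <(echo '{"subsystems":[]}') -t 1 -w decompress -l "$SPDK_ROOT/test/accel/bib" -y

After each run the harness asserts [[ -n software ]] and [[ -n <opcode> ]] against the values it read back, i.e. that the software accel module serviced the expected opcode, before printing the END TEST banner and the real/user/sys timings.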
08:44:26 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:04.618 08:44:26 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:06:04.618 08:44:26 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:04.618 08:44:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.618 ************************************ 00:06:04.618 START TEST accel_decomp_full 00:06:04.618 ************************************ 00:06:04.618 08:44:26 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:04.618 08:44:26 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:04.618 08:44:26 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:04.618 08:44:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.618 08:44:26 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.618 08:44:26 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:04.618 08:44:26 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:04.618 08:44:26 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:04.618 08:44:26 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.618 08:44:26 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.618 08:44:26 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.618 08:44:26 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.618 08:44:26 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.618 08:44:26 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:04.618 08:44:26 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:04.618 [2024-06-09 08:44:26.913007] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:06:04.618 [2024-06-09 08:44:26.913074] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1167687 ] 00:06:04.618 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.618 [2024-06-09 08:44:26.968001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.618 [2024-06-09 08:44:27.038400] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.618 08:44:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:04.618 08:44:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.618 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.618 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.618 08:44:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:04.618 08:44:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.618 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.618 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.618 08:44:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:04.618 08:44:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.618 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.618 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.618 08:44:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:04.618 08:44:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.618 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.618 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.618 08:44:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:04.618 08:44:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.618 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.618 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.618 08:44:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:04.618 08:44:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.618 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.618 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.618 08:44:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:04.618 08:44:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.618 08:44:27 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:04.618 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 
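As logged above, every accel_perf child starts DPDK with --huge-unlink, --base-virtaddr=0x200000000000 and a per-run --file-prefix=spdk_pid<pid>, and each run repeats the 'EAL: No free 2048 kB hugepages reported on node 1' notice, suggesting the hugepage pool on this rig was reserved on node 0 only; the tests proceed regardless. One way to confirm the per-node pools on such a machine (standard Linux sysfs layout):

    # free 2 MiB hugepages per NUMA node
    grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages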
00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:04.619 08:44:27 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:05.992 08:44:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:05.992 08:44:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:05.992 08:44:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:05.992 08:44:28 accel.accel_decomp_full -- accel/accel.sh@19 -- 
# read -r var val 00:06:05.992 08:44:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:05.992 08:44:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:05.992 08:44:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:05.992 08:44:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:05.992 08:44:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:05.992 08:44:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:05.992 08:44:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:05.992 08:44:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:05.992 08:44:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:05.992 08:44:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:05.992 08:44:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:05.992 08:44:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:05.992 08:44:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:05.992 08:44:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:05.992 08:44:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:05.992 08:44:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:05.992 08:44:28 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:05.992 08:44:28 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:05.992 08:44:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:05.992 08:44:28 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:05.992 08:44:28 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:05.992 08:44:28 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:05.992 08:44:28 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.992 00:06:05.992 real 0m1.336s 00:06:05.992 user 0m1.223s 00:06:05.992 sys 0m0.114s 00:06:05.992 08:44:28 accel.accel_decomp_full -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:05.992 08:44:28 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:05.992 ************************************ 00:06:05.992 END TEST accel_decomp_full 00:06:05.992 ************************************ 00:06:05.992 08:44:28 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:05.992 08:44:28 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:06:05.992 08:44:28 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:05.992 08:44:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.992 ************************************ 00:06:05.992 START TEST accel_decomp_mcore 00:06:05.992 ************************************ 00:06:05.992 08:44:28 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:05.992 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:05.992 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:05.992 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.992 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.992 08:44:28 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:05.992 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:05.992 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:05.992 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.992 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.992 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.992 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.992 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.992 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:05.992 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:05.992 [2024-06-09 08:44:28.306737] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:05.992 [2024-06-09 08:44:28.306801] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1167932 ] 00:06:05.992 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.992 [2024-06-09 08:44:28.362183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:05.992 [2024-06-09 08:44:28.434914] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.992 [2024-06-09 08:44:28.435007] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.992 [2024-06-09 08:44:28.435098] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:05.992 [2024-06-09 08:44:28.435100] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.992 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.992 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.992 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.992 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.992 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.992 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.992 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.992 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.992 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.992 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.992 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.992 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.992 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:05.992 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.992 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:05.993 08:44:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
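For the mcore case the harness adds -m 0xf, the SPDK core mask for cores 0-3, which is why four 'Reactor started' notices appear above. In the timing summary below, user time (~4.5s) exceeding the ~1.3s wall-clock by roughly the core count is the expected signature of four busy polling reactors. Direct equivalent, again with a hypothetical accel.json in place of the /dev/fd/62 pipe:

    # four-core decompress: one reactor per bit set in the -m core mask
    ./build/examples/accel_perf -c accel.json -m 0xf -t 1 -w decompress -l test/accel/bib -y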
00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.367 00:06:07.367 real 0m1.343s 00:06:07.367 user 0m4.562s 00:06:07.367 sys 0m0.120s 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:07.367 08:44:29 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:07.367 ************************************ 00:06:07.367 END TEST accel_decomp_mcore 00:06:07.367 ************************************ 00:06:07.367 08:44:29 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:07.367 08:44:29 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:06:07.367 08:44:29 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:07.367 08:44:29 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.367 ************************************ 00:06:07.367 START TEST accel_decomp_full_mcore 00:06:07.367 ************************************ 00:06:07.367 08:44:29 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:07.367 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:07.367 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:07.367 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.367 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.367 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:07.367 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:07.367 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:07.367 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.367 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.367 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.367 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 
0 -gt 0 ]] 00:06:07.367 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.367 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:07.367 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:07.368 [2024-06-09 08:44:29.714026] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:07.368 [2024-06-09 08:44:29.714075] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1168183 ] 00:06:07.368 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.368 [2024-06-09 08:44:29.768816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:07.368 [2024-06-09 08:44:29.841455] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.368 [2024-06-09 08:44:29.841550] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.368 [2024-06-09 08:44:29.841640] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:07.368 [2024-06-09 08:44:29.841642] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.368 08:44:29 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:07.368 08:44:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:08.743 08:44:31 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.743 00:06:08.743 real 0m1.353s 00:06:08.743 user 0m4.594s 00:06:08.743 sys 0m0.124s 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:08.743 08:44:31 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:08.743 ************************************ 00:06:08.743 END TEST accel_decomp_full_mcore 00:06:08.743 ************************************ 00:06:08.743 08:44:31 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:08.743 08:44:31 accel -- common/autotest_common.sh@1100 -- # '[' 11 -le 1 ']' 00:06:08.743 08:44:31 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:08.744 08:44:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.744 ************************************ 00:06:08.744 START TEST accel_decomp_mthread 00:06:08.744 ************************************ 00:06:08.744 08:44:31 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:08.744 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:08.744 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:08.744 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:08.744 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:08.744 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:08.744 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:08.744 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:08.744 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.744 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.744 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.744 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.744 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.744 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:08.744 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
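The mthread variant stays on a single core but appends -T 2 (echoed as val=2 below); going by the flag and the test name, this asks accel_perf to submit from two threads/channels on the one reactor, though that interpretation is inferred from the log rather than stated in it. Sketch:

    # single core, two submission threads (-T 2; interpretation inferred)
    ./build/examples/accel_perf -c accel.json -t 1 -w decompress -l test/accel/bib -y -T 2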
00:06:08.744 [2024-06-09 08:44:31.136640] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:08.744 [2024-06-09 08:44:31.136692] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1168430 ] 00:06:08.744 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.744 [2024-06-09 08:44:31.194312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.744 [2024-06-09 08:44:31.268233] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.002 08:44:31 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.002 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.003 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:09.003 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.003 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.003 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.003 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:09.003 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.003 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.003 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.003 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:09.003 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.003 08:44:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.003 08:44:31 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:06:09.936 08:44:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:09.936 08:44:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.936 08:44:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.936 08:44:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.936 08:44:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:09.936 08:44:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.936 08:44:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.936 08:44:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.936 08:44:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:09.936 08:44:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.936 08:44:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.936 08:44:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.936 08:44:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:09.936 08:44:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.936 08:44:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.936 08:44:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.936 08:44:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:09.936 08:44:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.936 08:44:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.936 08:44:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.936 08:44:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:09.936 08:44:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.936 08:44:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.936 08:44:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.936 08:44:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:09.936 08:44:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:09.936 08:44:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:09.936 08:44:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:09.936 08:44:32 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:09.936 08:44:32 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:09.936 08:44:32 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.936 00:06:09.936 real 0m1.342s 00:06:09.936 user 0m1.234s 00:06:09.936 sys 0m0.123s 00:06:09.936 08:44:32 accel.accel_decomp_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:09.936 08:44:32 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:09.936 ************************************ 00:06:09.936 END TEST accel_decomp_mthread 00:06:09.936 ************************************ 00:06:09.936 08:44:32 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:09.936 08:44:32 accel -- common/autotest_common.sh@1100 -- # '[' 13 -le 1 ']' 00:06:09.936 08:44:32 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:09.936 08:44:32 
accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.195 ************************************ 00:06:10.195 START TEST accel_decomp_full_mthread 00:06:10.195 ************************************ 00:06:10.195 08:44:32 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:10.195 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:10.195 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:10.195 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.195 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.195 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:10.195 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:10.195 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:10.195 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.195 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.195 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.195 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.195 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.195 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:10.195 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:10.195 [2024-06-09 08:44:32.544777] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
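This last decompress permutation combines both knobs, -o 0 -T 2: full 111250-byte buffers driven from two threads on core 0, which is also why its real time (~1.36s below) stays close to the single-threaded full run. Sketch with the same hypothetical accel.json:

    # full-size buffers on two submission threads
    ./build/examples/accel_perf -c accel.json -t 1 -w decompress -l test/accel/bib -y -o 0 -T 2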
00:06:10.195 [2024-06-09 08:44:32.544825] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1168675 ] 00:06:10.195 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.195 [2024-06-09 08:44:32.600976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.195 [2024-06-09 08:44:32.671449] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.195 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:10.195 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.195 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.195 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.195 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/bib 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:10.196 08:44:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:11.569 08:44:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:11.569 08:44:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:11.569 08:44:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:11.569 08:44:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:11.569 08:44:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:11.569 08:44:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:11.569 08:44:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:11.569 08:44:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:11.569 08:44:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:11.569 08:44:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:11.569 08:44:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:11.569 08:44:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:11.569 08:44:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:11.569 08:44:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:11.569 08:44:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:11.569 08:44:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:11.569 08:44:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:11.569 08:44:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:11.569 08:44:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:11.569 08:44:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:11.569 08:44:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:11.569 08:44:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:11.569 08:44:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:11.569 08:44:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:11.569 08:44:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:11.569 08:44:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:11.569 08:44:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:11.569 08:44:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:11.569 08:44:33 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:11.569 08:44:33 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:11.569 08:44:33 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.569 00:06:11.569 real 0m1.361s 00:06:11.569 user 0m1.262s 00:06:11.569 sys 0m0.113s 00:06:11.569 08:44:33 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:11.569 08:44:33 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:11.569 ************************************ 00:06:11.569 END TEST accel_decomp_full_mthread 00:06:11.569 
************************************ 00:06:11.569 08:44:33 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:11.569 08:44:33 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:11.569 08:44:33 accel -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:06:11.569 08:44:33 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:11.569 08:44:33 accel -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:11.569 08:44:33 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.569 08:44:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.569 08:44:33 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.569 08:44:33 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.569 08:44:33 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.569 08:44:33 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.569 08:44:33 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:11.569 08:44:33 accel -- accel/accel.sh@41 -- # jq -r . 00:06:11.569 ************************************ 00:06:11.569 START TEST accel_dif_functional_tests 00:06:11.569 ************************************ 00:06:11.569 08:44:33 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:11.569 [2024-06-09 08:44:33.986169] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:11.569 [2024-06-09 08:44:33.986203] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1168926 ] 00:06:11.569 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.569 [2024-06-09 08:44:34.037813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:11.569 [2024-06-09 08:44:34.109528] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.569 [2024-06-09 08:44:34.109625] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.569 [2024-06-09 08:44:34.109626] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.828 00:06:11.828 00:06:11.828 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.828 http://cunit.sourceforge.net/ 00:06:11.828 00:06:11.828 00:06:11.828 Suite: accel_dif 00:06:11.828 Test: verify: DIF generated, GUARD check ...passed 00:06:11.828 Test: verify: DIF generated, APPTAG check ...passed 00:06:11.828 Test: verify: DIF generated, REFTAG check ...passed 00:06:11.828 Test: verify: DIF not generated, GUARD check ...[2024-06-09 08:44:34.175956] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:11.828 passed 00:06:11.828 Test: verify: DIF not generated, APPTAG check ...[2024-06-09 08:44:34.176006] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:11.828 passed 00:06:11.828 Test: verify: DIF not generated, REFTAG check ...[2024-06-09 08:44:34.176042] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:11.828 passed 00:06:11.828 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:11.828 Test: verify: APPTAG incorrect, APPTAG check ...[2024-06-09 08:44:34.176086] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:11.828 passed 00:06:11.828 Test: 
verify: APPTAG incorrect, no APPTAG check ...passed 00:06:11.828 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:11.828 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:11.828 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-06-09 08:44:34.176181] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:11.828 passed 00:06:11.828 Test: verify copy: DIF generated, GUARD check ...passed 00:06:11.828 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:11.828 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:11.828 Test: verify copy: DIF not generated, GUARD check ...[2024-06-09 08:44:34.176281] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:11.828 passed 00:06:11.828 Test: verify copy: DIF not generated, APPTAG check ...[2024-06-09 08:44:34.176300] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:11.828 passed 00:06:11.828 Test: verify copy: DIF not generated, REFTAG check ...[2024-06-09 08:44:34.176318] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:11.828 passed 00:06:11.828 Test: generate copy: DIF generated, GUARD check ...passed 00:06:11.828 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:11.828 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:11.828 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:11.828 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:11.828 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:11.828 Test: generate copy: iovecs-len validate ...[2024-06-09 08:44:34.176482] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:11.828 passed 00:06:11.828 Test: generate copy: buffer alignment validate ...passed 00:06:11.828 00:06:11.828 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.828 suites 1 1 n/a 0 0 00:06:11.828 tests 26 26 26 0 0 00:06:11.828 asserts 115 115 115 0 n/a 00:06:11.828 00:06:11.828 Elapsed time = 0.000 seconds 00:06:11.828 00:06:11.828 real 0m0.396s 00:06:11.828 user 0m0.613s 00:06:11.828 sys 0m0.131s 00:06:11.828 08:44:34 accel.accel_dif_functional_tests -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:11.828 08:44:34 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:11.828 ************************************ 00:06:11.828 END TEST accel_dif_functional_tests 00:06:11.828 ************************************ 00:06:11.828 00:06:11.828 real 0m30.671s 00:06:11.828 user 0m34.545s 00:06:11.828 sys 0m4.067s 00:06:11.828 08:44:34 accel -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:11.828 08:44:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.828 ************************************ 00:06:11.828 END TEST accel 00:06:11.828 ************************************ 00:06:12.086 08:44:34 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:12.086 08:44:34 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:12.086 08:44:34 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:12.087 08:44:34 -- common/autotest_common.sh@10 -- # set +x 00:06:12.087 ************************************ 00:06:12.087 START TEST accel_rpc 00:06:12.087 ************************************ 00:06:12.087 08:44:34 accel_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:12.087 * Looking for test storage... 00:06:12.087 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/accel 00:06:12.087 08:44:34 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:12.087 08:44:34 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1169203 00:06:12.087 08:44:34 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1169203 00:06:12.087 08:44:34 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:12.087 08:44:34 accel_rpc -- common/autotest_common.sh@830 -- # '[' -z 1169203 ']' 00:06:12.087 08:44:34 accel_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.087 08:44:34 accel_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:12.087 08:44:34 accel_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.087 08:44:34 accel_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:12.087 08:44:34 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.087 [2024-06-09 08:44:34.567744] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
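The accel_rpc suite starting here depends on launching spdk_tgt with --wait-for-rpc so that opcode-to-module assignments can be issued before the framework initializes. A minimal standalone sketch of that sequence, assuming an SPDK checkout at $SPDK_DIR and the default /var/tmp/spdk.sock RPC socket (the socket-wait loop below is an illustrative stand-in for the waitforlisten helper used in this run):
SPDK_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
"$SPDK_DIR/build/bin/spdk_tgt" --wait-for-rpc &            # hold off framework init until told
tgt_pid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done        # wait for the RPC socket to appear
"$SPDK_DIR/scripts/rpc.py" accel_assign_opc -o copy -m software   # pin the 'copy' opcode to the software module
"$SPDK_DIR/scripts/rpc.py" framework_start_init                   # now let subsystems initialize
"$SPDK_DIR/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy  # expect: software
kill "$tgt_pid"; wait "$tgt_pid" 2>/dev/null
The test below additionally assigns the opcode to a bogus module ("incorrect") first, to confirm a later valid assignment overrides it.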
00:06:12.087 [2024-06-09 08:44:34.567792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1169203 ] 00:06:12.087 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.087 [2024-06-09 08:44:34.621965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.345 [2024-06-09 08:44:34.695012] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.911 08:44:35 accel_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:12.911 08:44:35 accel_rpc -- common/autotest_common.sh@863 -- # return 0 00:06:12.911 08:44:35 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:12.911 08:44:35 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:12.911 08:44:35 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:12.911 08:44:35 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:12.911 08:44:35 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:12.911 08:44:35 accel_rpc -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:12.911 08:44:35 accel_rpc -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:12.911 08:44:35 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.911 ************************************ 00:06:12.911 START TEST accel_assign_opcode 00:06:12.911 ************************************ 00:06:12.911 08:44:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # accel_assign_opcode_test_suite 00:06:12.911 08:44:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:12.911 08:44:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:12.911 08:44:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:12.911 [2024-06-09 08:44:35.389077] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:12.911 08:44:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:12.911 08:44:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:12.911 08:44:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:12.911 08:44:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:12.911 [2024-06-09 08:44:35.401098] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:12.911 08:44:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:12.911 08:44:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:12.911 08:44:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:12.911 08:44:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:13.169 08:44:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:13.169 08:44:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:13.169 08:44:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:13.169 08:44:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:13.169 08:44:35 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:13.169 08:44:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:13.169 08:44:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:13.169 software 00:06:13.169 00:06:13.169 real 0m0.245s 00:06:13.169 user 0m0.045s 00:06:13.169 sys 0m0.011s 00:06:13.169 08:44:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:13.169 08:44:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:13.169 ************************************ 00:06:13.169 END TEST accel_assign_opcode 00:06:13.169 ************************************ 00:06:13.169 08:44:35 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1169203 00:06:13.169 08:44:35 accel_rpc -- common/autotest_common.sh@949 -- # '[' -z 1169203 ']' 00:06:13.169 08:44:35 accel_rpc -- common/autotest_common.sh@953 -- # kill -0 1169203 00:06:13.169 08:44:35 accel_rpc -- common/autotest_common.sh@954 -- # uname 00:06:13.169 08:44:35 accel_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:13.169 08:44:35 accel_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1169203 00:06:13.169 08:44:35 accel_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:13.169 08:44:35 accel_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:13.169 08:44:35 accel_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1169203' 00:06:13.169 killing process with pid 1169203 00:06:13.169 08:44:35 accel_rpc -- common/autotest_common.sh@968 -- # kill 1169203 00:06:13.169 08:44:35 accel_rpc -- common/autotest_common.sh@973 -- # wait 1169203 00:06:13.736 00:06:13.736 real 0m1.563s 00:06:13.736 user 0m1.623s 00:06:13.736 sys 0m0.402s 00:06:13.736 08:44:36 accel_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:13.736 08:44:36 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.736 ************************************ 00:06:13.736 END TEST accel_rpc 00:06:13.736 ************************************ 00:06:13.737 08:44:36 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/cmdline.sh 00:06:13.737 08:44:36 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:13.737 08:44:36 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:13.737 08:44:36 -- common/autotest_common.sh@10 -- # set +x 00:06:13.737 ************************************ 00:06:13.737 START TEST app_cmdline 00:06:13.737 ************************************ 00:06:13.737 08:44:36 app_cmdline -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/cmdline.sh 00:06:13.737 * Looking for test storage... 
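The app_cmdline test that begins here starts the target with an RPC allow-list (--rpcs-allowed spdk_get_version,rpc_get_methods), so only those two methods are callable and anything else must fail with JSON-RPC error -32601. A sketch of that behavior under the same assumptions as above ($SPDK_DIR checkout, default socket):
SPDK_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
"$SPDK_DIR/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
tgt_pid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done
"$SPDK_DIR/scripts/rpc.py" spdk_get_version              # allowed: returns the version JSON seen below
"$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats || true  # blocked by the allow-list: -32601 "Method not found"
kill "$tgt_pid"; wait "$tgt_pid" 2>/dev/null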
00:06:13.737 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app 00:06:13.737 08:44:36 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:13.737 08:44:36 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:13.737 08:44:36 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1169509 00:06:13.737 08:44:36 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1169509 00:06:13.737 08:44:36 app_cmdline -- common/autotest_common.sh@830 -- # '[' -z 1169509 ']' 00:06:13.737 08:44:36 app_cmdline -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.737 08:44:36 app_cmdline -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:13.737 08:44:36 app_cmdline -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.737 08:44:36 app_cmdline -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:13.737 08:44:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:13.737 [2024-06-09 08:44:36.193416] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:13.737 [2024-06-09 08:44:36.193458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1169509 ] 00:06:13.737 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.737 [2024-06-09 08:44:36.248469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.995 [2024-06-09 08:44:36.326980] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.561 08:44:36 app_cmdline -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:14.561 08:44:36 app_cmdline -- common/autotest_common.sh@863 -- # return 0 00:06:14.561 08:44:36 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:14.819 { 00:06:14.819 "version": "SPDK v24.09-pre git sha1 e55c9a812", 00:06:14.819 "fields": { 00:06:14.819 "major": 24, 00:06:14.819 "minor": 9, 00:06:14.819 "patch": 0, 00:06:14.819 "suffix": "-pre", 00:06:14.819 "commit": "e55c9a812" 00:06:14.819 } 00:06:14.819 } 00:06:14.819 08:44:37 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:14.819 08:44:37 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:14.819 08:44:37 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:14.819 08:44:37 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:14.819 08:44:37 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:14.819 08:44:37 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:14.819 08:44:37 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:14.819 08:44:37 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:14.819 08:44:37 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:14.819 08:44:37 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:14.819 08:44:37 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:14.819 08:44:37 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:14.819 08:44:37 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:14.819 08:44:37 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:06:14.819 08:44:37 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:14.819 08:44:37 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:06:14.819 08:44:37 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:14.819 08:44:37 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:06:14.819 08:44:37 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:14.819 08:44:37 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:06:14.819 08:44:37 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:14.819 08:44:37 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:06:14.819 08:44:37 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py ]] 00:06:14.819 08:44:37 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:14.819 request: 00:06:14.819 { 00:06:14.819 "method": "env_dpdk_get_mem_stats", 00:06:14.819 "req_id": 1 00:06:14.819 } 00:06:14.819 Got JSON-RPC error response 00:06:14.819 response: 00:06:14.819 { 00:06:14.819 "code": -32601, 00:06:14.819 "message": "Method not found" 00:06:14.819 } 00:06:14.819 08:44:37 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:06:14.819 08:44:37 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:14.819 08:44:37 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:14.819 08:44:37 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:14.819 08:44:37 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1169509 00:06:14.819 08:44:37 app_cmdline -- common/autotest_common.sh@949 -- # '[' -z 1169509 ']' 00:06:14.819 08:44:37 app_cmdline -- common/autotest_common.sh@953 -- # kill -0 1169509 00:06:14.819 08:44:37 app_cmdline -- common/autotest_common.sh@954 -- # uname 00:06:14.819 08:44:37 app_cmdline -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:15.078 08:44:37 app_cmdline -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1169509 00:06:15.078 08:44:37 app_cmdline -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:15.078 08:44:37 app_cmdline -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:15.078 08:44:37 app_cmdline -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1169509' 00:06:15.078 killing process with pid 1169509 00:06:15.078 08:44:37 app_cmdline -- common/autotest_common.sh@968 -- # kill 1169509 00:06:15.078 08:44:37 app_cmdline -- common/autotest_common.sh@973 -- # wait 1169509 00:06:15.336 00:06:15.336 real 0m1.663s 00:06:15.336 user 0m1.980s 00:06:15.336 sys 0m0.398s 00:06:15.336 08:44:37 app_cmdline -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:15.336 08:44:37 
app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:15.336 ************************************ 00:06:15.336 END TEST app_cmdline 00:06:15.336 ************************************ 00:06:15.336 08:44:37 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/version.sh 00:06:15.336 08:44:37 -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:06:15.336 08:44:37 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:15.337 08:44:37 -- common/autotest_common.sh@10 -- # set +x 00:06:15.337 ************************************ 00:06:15.337 START TEST version 00:06:15.337 ************************************ 00:06:15.337 08:44:37 version -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/version.sh 00:06:15.337 * Looking for test storage... 00:06:15.337 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app 00:06:15.337 08:44:37 version -- app/version.sh@17 -- # get_header_version major 00:06:15.594 08:44:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/include/spdk/version.h 00:06:15.594 08:44:37 version -- app/version.sh@14 -- # cut -f2 00:06:15.594 08:44:37 version -- app/version.sh@14 -- # tr -d '"' 00:06:15.594 08:44:37 version -- app/version.sh@17 -- # major=24 00:06:15.594 08:44:37 version -- app/version.sh@18 -- # get_header_version minor 00:06:15.594 08:44:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/include/spdk/version.h 00:06:15.594 08:44:37 version -- app/version.sh@14 -- # cut -f2 00:06:15.594 08:44:37 version -- app/version.sh@14 -- # tr -d '"' 00:06:15.594 08:44:37 version -- app/version.sh@18 -- # minor=9 00:06:15.594 08:44:37 version -- app/version.sh@19 -- # get_header_version patch 00:06:15.595 08:44:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/include/spdk/version.h 00:06:15.595 08:44:37 version -- app/version.sh@14 -- # cut -f2 00:06:15.595 08:44:37 version -- app/version.sh@14 -- # tr -d '"' 00:06:15.595 08:44:37 version -- app/version.sh@19 -- # patch=0 00:06:15.595 08:44:37 version -- app/version.sh@20 -- # get_header_version suffix 00:06:15.595 08:44:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/include/spdk/version.h 00:06:15.595 08:44:37 version -- app/version.sh@14 -- # tr -d '"' 00:06:15.595 08:44:37 version -- app/version.sh@14 -- # cut -f2 00:06:15.595 08:44:37 version -- app/version.sh@20 -- # suffix=-pre 00:06:15.595 08:44:37 version -- app/version.sh@22 -- # version=24.9 00:06:15.595 08:44:37 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:15.595 08:44:37 version -- app/version.sh@28 -- # version=24.9rc0 00:06:15.595 08:44:37 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python 00:06:15.595 08:44:37 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:15.595 08:44:37 version -- app/version.sh@30 -- # py_version=24.9rc0 
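The version test above assembles "24.9rc0" by scraping the #define lines out of include/spdk/version.h with the grep/cut/tr pipeline shown in the xtrace. The same logic standalone (get_header_version mirrors the helper in app/version.sh; the -pre to rc0 mapping is inferred from the version=24.9 -> py_version=24.9rc0 steps logged above):
SPDK_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
hdr="$SPDK_DIR/include/spdk/version.h"
get_header_version() {
  # field 2 of the tab-separated #define line, with quotes stripped
  grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
}
major=$(get_header_version MAJOR)    # 24
minor=$(get_header_version MINOR)    # 9
patch=$(get_header_version PATCH)    # 0
suffix=$(get_header_version SUFFIX)  # -pre
version="${major}.${minor}"
if (( patch != 0 )); then version="${version}.${patch}"; fi
if [ "$suffix" = "-pre" ]; then version="${version}rc0"; fi
echo "$version"   # -> 24.9rc0, matching py_version above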
00:06:15.595 08:44:37 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:15.595 00:06:15.595 real 0m0.154s 00:06:15.595 user 0m0.087s 00:06:15.595 sys 0m0.101s 00:06:15.595 08:44:37 version -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:15.595 08:44:37 version -- common/autotest_common.sh@10 -- # set +x 00:06:15.595 ************************************ 00:06:15.595 END TEST version 00:06:15.595 ************************************ 00:06:15.595 08:44:37 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:15.595 08:44:37 -- spdk/autotest.sh@198 -- # uname -s 00:06:15.595 08:44:37 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:15.595 08:44:37 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:15.595 08:44:37 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:15.595 08:44:37 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:15.595 08:44:37 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:15.595 08:44:37 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:15.595 08:44:37 -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:15.595 08:44:37 -- common/autotest_common.sh@10 -- # set +x 00:06:15.595 08:44:38 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:15.595 08:44:38 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:15.595 08:44:38 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:15.595 08:44:38 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:15.595 08:44:38 -- spdk/autotest.sh@283 -- # '[' rdma = rdma ']' 00:06:15.595 08:44:38 -- spdk/autotest.sh@284 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:06:15.595 08:44:38 -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:15.595 08:44:38 -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:15.595 08:44:38 -- common/autotest_common.sh@10 -- # set +x 00:06:15.595 ************************************ 00:06:15.595 START TEST nvmf_rdma 00:06:15.595 ************************************ 00:06:15.595 08:44:38 nvmf_rdma -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:06:15.595 * Looking for test storage... 00:06:15.595 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf 00:06:15.595 08:44:38 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:06:15.595 08:44:38 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:15.595 08:44:38 nvmf_rdma -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:06:15.595 08:44:38 nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:06:15.595 08:44:38 nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.595 08:44:38 nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.595 08:44:38 nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.595 08:44:38 nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.595 08:44:38 nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.595 08:44:38 nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.854 08:44:38 nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.854 08:44:38 nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.854 08:44:38 nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.854 08:44:38 nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.854 08:44:38 nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:06:15.854 08:44:38 nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:06:15.854 08:44:38 nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.854 08:44:38 nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.854 08:44:38 nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:15.854 08:44:38 nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.854 08:44:38 nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:06:15.854 08:44:38 nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.854 08:44:38 nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.854 08:44:38 nvmf_rdma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.854 08:44:38 nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.854 08:44:38 nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.854 08:44:38 nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.854 08:44:38 nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:06:15.854 08:44:38 nvmf_rdma -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.854 08:44:38 nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:06:15.854 08:44:38 nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:15.854 08:44:38 nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:15.854 08:44:38 nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.854 08:44:38 nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.854 08:44:38 nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.854 08:44:38 nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:15.854 08:44:38 nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:15.854 08:44:38 nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:15.854 08:44:38 nvmf_rdma -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:15.854 08:44:38 nvmf_rdma -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:15.854 08:44:38 nvmf_rdma -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:15.854 08:44:38 nvmf_rdma -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:15.854 08:44:38 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:06:15.854 08:44:38 nvmf_rdma -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:15.854 08:44:38 nvmf_rdma -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:06:15.854 08:44:38 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:15.854 08:44:38 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:15.854 08:44:38 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:06:15.854 ************************************ 00:06:15.854 START TEST nvmf_example 00:06:15.854 ************************************ 00:06:15.854 08:44:38 nvmf_rdma.nvmf_example -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:06:15.854 * Looking for test storage... 
00:06:15.854 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:06:15.854 08:44:38 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:06:15.854 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:15.854 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.854 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.854 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.854 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.854 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.854 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.854 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.854 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.854 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.854 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.854 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:06:15.854 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:06:15.854 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.854 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.854 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:15.854 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.854 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:15.855 08:44:38 
nvmf_rdma.nvmf_example -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:15.855 08:44:38 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:22.421 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:22.421 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@377 -- # modinfo irdma 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@377 -- # modprobe irdma 
roce_ena=1 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:22.421 Found net devices under 0000:af:00.0: cvl_0_0 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:22.421 Found net devices under 0000:af:00.1: cvl_0_1 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@420 -- # rdma_device_init 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@58 -- # uname 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@62 -- # modprobe ib_cm 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@63 -- # modprobe ib_core 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@64 -- # modprobe ib_umad 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@66 -- # modprobe iw_cm 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:06:22.421 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@502 -- # allocate_nic_ips 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # get_rdma_if_list 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- 
nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo cvl_0_0 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo cvl_0_1 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:06:22.422 8: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:06:22.422 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:06:22.422 altname enp175s0f0np0 00:06:22.422 altname ens801f0np0 00:06:22.422 inet 192.168.100.8/24 scope global cvl_0_0 00:06:22.422 valid_lft forever preferred_lft forever 00:06:22.422 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:06:22.422 valid_lft forever preferred_lft forever 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- 
nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:06:22.422 9: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:06:22.422 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:06:22.422 altname enp175s0f1np1 00:06:22.422 altname ens801f1np1 00:06:22.422 inet 192.168.100.9/24 scope global cvl_0_1 00:06:22.422 valid_lft forever preferred_lft forever 00:06:22.422 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:06:22.422 valid_lft forever preferred_lft forever 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # get_rdma_if_list 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo cvl_0_0 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo cvl_0_1 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:06:22.422 08:44:43 
nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:22.422 08:44:43 nvmf_rdma.nvmf_example -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:06:22.422 192.168.100.9' 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:06:22.422 192.168.100.9' 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # head -n 1 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:06:22.422 192.168.100.9' 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # tail -n +2 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # head -n 1 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1173048 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1173048 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- common/autotest_common.sh@830 -- # '[' -z 1173048 ']' 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
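
The address discovery traced above reduces to one small idiom; here is a minimal standalone sketch of it, assuming the cvl_0_0/cvl_0_1 interface names enumerated in this run:

# Per-interface IPv4 lookup and first/second target split, as traced above.
get_ip_address() {
    local interface=$1
    # $4 of `ip -o -4 addr show` is ADDR/PREFIX; cut strips the prefix length.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST="$(get_ip_address cvl_0_0)
$(get_ip_address cvl_0_1)"

# head/tail split the newline-separated list into the two target addresses.
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

With this run's interfaces that yields 192.168.100.8 and 192.168.100.9, matching the addresses the harness echoes below.
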
00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:22.422 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- common/autotest_common.sh@863 -- # return 0 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:22.422 08:44:44 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:22.423 08:44:44 nvmf_rdma.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:22.423 08:44:44 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:22.682 08:44:44 nvmf_rdma.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:22.682 08:44:44 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:22.682 08:44:44 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:22.682 08:44:44 nvmf_rdma.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:22.682 08:44:44 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:22.682 08:44:44 nvmf_rdma.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:22.682 08:44:44 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:22.682 08:44:44 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:22.682 08:44:44 nvmf_rdma.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:22.682 08:44:44 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:22.682 08:44:45 nvmf_rdma.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:22.682 08:44:45 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:22.682 08:44:45 nvmf_rdma.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:22.682 08:44:45 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:22.682 08:44:45 nvmf_rdma.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:22.682 08:44:45 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:22.682 08:44:45 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
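
The rpc_cmd sequence above is the entire target bring-up. As a rough standalone equivalent (assuming, as the harness suggests, that rpc_cmd forwards its arguments to the same RPC methods scripts/rpc.py exposes on /var/tmp/spdk.sock), the same configuration would be:

# Same parameters as this run: RDMA transport, one 64 MiB malloc namespace,
# subsystem cnode1 listening on 192.168.100.8:4420.
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
./scripts/rpc.py bdev_malloc_create 64 512    # -> Malloc0 (64 MiB, 512 B blocks)
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t rdma -a 192.168.100.8 -s 4420

The -a flag allows any host NQN to connect, which is what lets the perf initiator below attach without an allow-list.
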
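
For the load generator that follows, the flags on the spdk_nvme_perf command line above decode as shown here (a restatement of the trace, no new parameters):

# The perf invocation above, annotated:
#   -q 64     queue depth of 64
#   -o 4096   4 KiB I/O size
#   -w randrw random mixed read/write workload
#   -M 30     rwmixread: 30% reads / 70% writes
#   -t 10     run for 10 seconds
#   -r '...'  transport ID of the target: RDMA over IPv4 to
#             192.168.100.8:4420, subsystem nqn.2016-06.io.spdk:cnode1
spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

As a sanity check on the results table below, 24706 IOPS at 4 KiB works out to the 96.51 MiB/s shown in the same row.
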
00:06:22.682 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.904 Initializing NVMe Controllers 00:06:34.904 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:06:34.904 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:34.904 Initialization complete. Launching workers. 00:06:34.904 ======================================================== 00:06:34.904 Latency(us) 00:06:34.904 Device Information : IOPS MiB/s Average min max 00:06:34.904 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 24706.30 96.51 2591.61 520.88 14967.57 00:06:34.904 ======================================================== 00:06:34.904 Total : 24706.30 96.51 2591.61 520.88 14967.57 00:06:34.904 00:06:34.904 08:44:56 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:34.904 08:44:56 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:34.904 08:44:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:34.904 08:44:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:34.904 08:44:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:06:34.904 08:44:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:06:34.904 08:44:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:34.904 08:44:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:34.904 08:44:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:06:34.904 rmmod nvme_rdma 00:06:34.904 rmmod nvme_fabrics 00:06:34.904 08:44:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:34.904 08:44:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:34.904 08:44:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:34.904 08:44:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1173048 ']' 00:06:34.904 08:44:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1173048 00:06:34.904 08:44:56 nvmf_rdma.nvmf_example -- common/autotest_common.sh@949 -- # '[' -z 1173048 ']' 00:06:34.904 08:44:56 nvmf_rdma.nvmf_example -- common/autotest_common.sh@953 -- # kill -0 1173048 00:06:34.904 08:44:56 nvmf_rdma.nvmf_example -- common/autotest_common.sh@954 -- # uname 00:06:34.904 08:44:56 nvmf_rdma.nvmf_example -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:34.904 08:44:56 nvmf_rdma.nvmf_example -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1173048 00:06:34.904 08:44:56 nvmf_rdma.nvmf_example -- common/autotest_common.sh@955 -- # process_name=nvmf 00:06:34.904 08:44:56 nvmf_rdma.nvmf_example -- common/autotest_common.sh@959 -- # '[' nvmf = sudo ']' 00:06:34.904 08:44:56 nvmf_rdma.nvmf_example -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1173048' 00:06:34.904 killing process with pid 1173048 00:06:34.904 08:44:56 nvmf_rdma.nvmf_example -- common/autotest_common.sh@968 -- # kill 1173048 00:06:34.904 08:44:56 nvmf_rdma.nvmf_example -- common/autotest_common.sh@973 -- # wait 1173048 00:06:34.904 nvmf threads initialize successfully 00:06:34.904 bdev subsystem init successfully 00:06:34.904 created a nvmf target service 00:06:34.904 create targets's poll groups done 00:06:34.904 all subsystems of target started 00:06:34.904 nvmf target is running 00:06:34.904 all subsystems of target stopped 00:06:34.904 destroy targets's poll groups done 
00:06:34.904 destroyed the nvmf target service 00:06:34.904 bdev subsystem finish successfully 00:06:34.904 nvmf threads destroy successfully 00:06:34.904 08:44:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:34.904 08:44:56 nvmf_rdma.nvmf_example -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:06:34.904 08:44:56 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:34.904 08:44:56 nvmf_rdma.nvmf_example -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:34.904 08:44:56 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:34.904 00:06:34.904 real 0m18.351s 00:06:34.904 user 0m51.122s 00:06:34.904 sys 0m4.636s 00:06:34.904 08:44:56 nvmf_rdma.nvmf_example -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:34.904 08:44:56 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:34.904 ************************************ 00:06:34.904 END TEST nvmf_example 00:06:34.904 ************************************ 00:06:34.904 08:44:56 nvmf_rdma -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:06:34.904 08:44:56 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:34.904 08:44:56 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:34.904 08:44:56 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:06:34.904 ************************************ 00:06:34.904 START TEST nvmf_filesystem 00:06:34.904 ************************************ 00:06:34.904 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:06:34.904 * Looking for test storage... 
00:06:34.904 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:06:34.904 08:44:56 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh 00:06:34.904 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:34.904 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:34.904 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:34.904 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:34.904 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:06:34.904 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output ']' 00:06:34.904 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/build_config.sh 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:34.905 
08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- 
common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/applications.sh 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/applications.sh 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/applications.sh@10 -- # 
_app_dir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/include/spdk/config.h ]] 00:06:34.905 08:44:56 nvmf_rdma.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:34.905 #define SPDK_CONFIG_H 00:06:34.905 #define SPDK_CONFIG_APPS 1 00:06:34.905 #define SPDK_CONFIG_ARCH native 00:06:34.905 #undef SPDK_CONFIG_ASAN 00:06:34.905 #undef SPDK_CONFIG_AVAHI 00:06:34.905 #undef SPDK_CONFIG_CET 00:06:34.905 #define SPDK_CONFIG_COVERAGE 1 00:06:34.905 #define SPDK_CONFIG_CROSS_PREFIX 00:06:34.905 #undef SPDK_CONFIG_CRYPTO 00:06:34.905 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:34.905 #undef SPDK_CONFIG_CUSTOMOCF 00:06:34.905 #undef SPDK_CONFIG_DAOS 00:06:34.905 #define SPDK_CONFIG_DAOS_DIR 00:06:34.905 #define SPDK_CONFIG_DEBUG 1 00:06:34.905 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:34.905 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build 00:06:34.905 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:34.905 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:34.905 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:34.905 #undef SPDK_CONFIG_DPDK_UADK 00:06:34.905 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk 00:06:34.905 #define SPDK_CONFIG_EXAMPLES 1 00:06:34.905 #undef SPDK_CONFIG_FC 00:06:34.905 #define SPDK_CONFIG_FC_PATH 00:06:34.905 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:34.905 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:34.905 #undef SPDK_CONFIG_FUSE 00:06:34.905 #undef SPDK_CONFIG_FUZZER 00:06:34.905 #define SPDK_CONFIG_FUZZER_LIB 00:06:34.905 #undef SPDK_CONFIG_GOLANG 00:06:34.906 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:34.906 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:34.906 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:34.906 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:06:34.906 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:34.906 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:34.906 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:34.906 #define SPDK_CONFIG_IDXD 1 00:06:34.906 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:34.906 #undef SPDK_CONFIG_IPSEC_MB 00:06:34.906 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:34.906 #define SPDK_CONFIG_ISAL 1 00:06:34.906 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:34.906 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:34.906 #define SPDK_CONFIG_LIBDIR 00:06:34.906 #undef SPDK_CONFIG_LTO 00:06:34.906 #define SPDK_CONFIG_MAX_LCORES 00:06:34.906 #define 
SPDK_CONFIG_NVME_CUSE 1 00:06:34.906 #undef SPDK_CONFIG_OCF 00:06:34.906 #define SPDK_CONFIG_OCF_PATH 00:06:34.906 #define SPDK_CONFIG_OPENSSL_PATH 00:06:34.906 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:34.906 #define SPDK_CONFIG_PGO_DIR 00:06:34.906 #undef SPDK_CONFIG_PGO_USE 00:06:34.906 #define SPDK_CONFIG_PREFIX /usr/local 00:06:34.906 #undef SPDK_CONFIG_RAID5F 00:06:34.906 #undef SPDK_CONFIG_RBD 00:06:34.906 #define SPDK_CONFIG_RDMA 1 00:06:34.906 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:34.906 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:34.906 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:34.906 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:34.906 #define SPDK_CONFIG_SHARED 1 00:06:34.906 #undef SPDK_CONFIG_SMA 00:06:34.906 #define SPDK_CONFIG_TESTS 1 00:06:34.906 #undef SPDK_CONFIG_TSAN 00:06:34.906 #define SPDK_CONFIG_UBLK 1 00:06:34.906 #define SPDK_CONFIG_UBSAN 1 00:06:34.906 #undef SPDK_CONFIG_UNIT_TESTS 00:06:34.906 #undef SPDK_CONFIG_URING 00:06:34.906 #define SPDK_CONFIG_URING_PATH 00:06:34.906 #undef SPDK_CONFIG_URING_ZNS 00:06:34.906 #undef SPDK_CONFIG_USDT 00:06:34.906 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:34.906 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:34.906 #undef SPDK_CONFIG_VFIO_USER 00:06:34.906 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:34.906 #define SPDK_CONFIG_VHOST 1 00:06:34.906 #define SPDK_CONFIG_VIRTIO 1 00:06:34.906 #undef SPDK_CONFIG_VTUNE 00:06:34.906 #define SPDK_CONFIG_VTUNE_DIR 00:06:34.906 #define SPDK_CONFIG_WERROR 1 00:06:34.906 #define SPDK_CONFIG_WPDK_DIR 00:06:34.906 #undef SPDK_CONFIG_XNVME 00:06:34.906 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/common 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/common 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/.run_test_name 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power ]] 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:06:34.906 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # : 1 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@157 -- # export 
SPDK_TEST_SMA 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@236 -- 
# echo leak:libfuse3.so 00:06:34.907 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j96 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=rdma 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1175249 ]] 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1175249 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.7c3b2e 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target /tmp/spdk.7c3b2e/tests/target /tmp/spdk.7c3b2e 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 
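[Annotation - illustrative sketch, not part of the captured log]
The sanitizer plumbing traced above rebuilds a leak-suppression file and points LSAN at it
before any SPDK binary runs. In isolation, the pattern is roughly (file path as captured;
the exact write mechanism in autotest_common.sh may differ):

    rm -rf /var/tmp/asan_suppression_file
    echo "leak:libfuse3.so" >> /var/tmp/asan_suppression_file
    export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134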
00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=900243456 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4384186368 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=90021523456 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=95562735616 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5541212160 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=47777992704 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=47781367808 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=19103158272 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=19112550400 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9392128 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=47780921344 00:06:34.908 08:44:56 
nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=47781367808 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=446464 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=9556267008 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=9556271104 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:06:34.908 * Looking for test storage... 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:06:34.908 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=90021523456 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=7755804672 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:06:34.910 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1681 -- # set -o errtrace 
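[Annotation - illustrative sketch, not part of the captured log]
The set_test_storage pass that just completed parses `df -T` into bash associative arrays,
then takes the first candidate directory whose filesystem can hold the request; note the
requested 2214592512 bytes is the 2 GiB argument padded by 64 MiB. A condensed sketch using
the names from the trace (the harness additionally normalizes df's 1K-block counts to
bytes, which happens outside the captured lines):

    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source; fss["$mount"]=$fs
        sizes["$mount"]=$size; avails["$mount"]=$avail; uses["$mount"]=$use
    done < <(df -T | grep -v Filesystem)

    # candidates: the test dir, a mktemp-based fallback, then the fallback root
    for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        if (( avails[$mount] >= requested_size )); then
            export SPDK_TEST_STORAGE=$target_dir
            break
        fi
    done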
00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1682 -- # shopt -s extdebug 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1686 -- # true 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1688 -- # xtrace_fd 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:34.910 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:34.911 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:34.911 08:44:56 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:34.911 08:44:56 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:34.911 08:44:56 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:34.911 08:44:56 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:39.147 
08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:39.147 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:39.147 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
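[Annotation - illustrative sketch, not part of the captured log]
The discovery pass above classifies NICs purely by PCI vendor:device ID; 0x8086:0x159b
matched both ports of the Intel E810 here, which is why DRIVERS was set to ice/irdma. The
pci_bus_cache it reads is internal to nvmf/common.sh, but the same lookup can be
approximated with pciutils:

    # enumerate E810 functions by numeric ID, mirroring the
    # "Found 0000:af:00.0 (0x8086 - 0x159b)" lines in the trace
    for bdf in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        echo "Found $bdf (0x8086 - 0x159b)"
        ls "/sys/bus/pci/devices/$bdf/net/"   # attached netdevs, e.g. cvl_0_0
    done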
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ rdma == rdma ]]
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]]
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@375 -- # (( 1 != 1 ))
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@377 -- # modinfo irdma
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]]
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:06:39.147 Found net devices under 0000:af:00.0: cvl_0_0
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]]
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:06:39.147 Found net devices under 0000:af:00.1: cvl_0_1
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]]
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]]
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@420 -- # rdma_device_init
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@58 -- # uname
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']'
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@62 -- # modprobe ib_cm
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@63 -- # modprobe ib_core
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@64 -- # modprobe ib_umad
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe iw_cm
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe rdma_cm
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@502 -- # allocate_nic_ips
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # get_rdma_if_list
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs
00:06:39.147 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs
00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net
00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 ))
00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]]
00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo cvl_0_0
00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2
00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}"
00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]]
00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]]
00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo cvl_0_1
00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2
00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list)
00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0
00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=cvl_0_0
00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0
00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}'
00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1
00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.8
00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]]
00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show cvl_0_0
00:06:39.148 8: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000
00:06:39.148 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff
00:06:39.148 altname enp175s0f0np0
00:06:39.148 altname ens801f0np0
00:06:39.148 inet 192.168.100.8/24 scope global cvl_0_0
00:06:39.148 valid_lft forever preferred_lft forever
00:06:39.148 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll
00:06:39.148 valid_lft forever preferred_lft forever
00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- 
nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:06:39.148 9: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:06:39.148 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:06:39.148 altname enp175s0f1np1 00:06:39.148 altname ens801f1np1 00:06:39.148 inet 192.168.100.9/24 scope global cvl_0_1 00:06:39.148 valid_lft forever preferred_lft forever 00:06:39.148 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:06:39.148 valid_lft forever preferred_lft forever 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo cvl_0_0 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo cvl_0_1 00:06:39.148 08:45:01 
nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:06:39.148 192.168.100.9' 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:06:39.148 192.168.100.9' 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # head -n 1 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:06:39.148 192.168.100.9' 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # tail -n +2 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # head -n 1 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:06:39.148 08:45:01 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:39.426 08:45:01 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:39.426 08:45:01 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:39.426 08:45:01 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:39.426 ************************************ 00:06:39.426 START TEST nvmf_filesystem_no_in_capsule 00:06:39.426 ************************************ 00:06:39.426 08:45:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 0 00:06:39.426 08:45:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:06:39.426 08:45:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:39.426 08:45:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:39.426 08:45:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:39.426 08:45:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:39.426 08:45:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1178177 00:06:39.426 08:45:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1178177 00:06:39.426 08:45:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:39.426 08:45:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 1178177 ']' 00:06:39.426 08:45:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.426 08:45:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:39.426 08:45:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.426 08:45:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:39.426 08:45:01 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:39.426 [2024-06-09 08:45:01.767882] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:39.426 [2024-06-09 08:45:01.767928] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:39.426 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.426 [2024-06-09 08:45:01.823412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:39.426 [2024-06-09 08:45:01.903318] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:39.426 [2024-06-09 08:45:01.903355] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:39.426 [2024-06-09 08:45:01.903362] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:39.426 [2024-06-09 08:45:01.903368] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:39.426 [2024-06-09 08:45:01.903373] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
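[Annotation - illustrative sketch, not part of the captured log]
nvmfappstart above reduces to launching nvmf_tgt with an explicit shared-memory id (-i 0),
tracepoint mask (-e 0xFFFF) and core mask (-m 0xF), then waiting for its RPC socket.
waitforlisten's real implementation has more guards and timeouts; a minimal equivalent:

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the default UNIX-domain RPC socket until the app answers
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1   # give up if the target died during startup
        sleep 0.5
    done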
00:06:39.426 [2024-06-09 08:45:01.903424] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.426 [2024-06-09 08:45:01.903527] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.426 [2024-06-09 08:45:01.903603] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.426 [2024-06-09 08:45:01.903604] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:40.364 [2024-06-09 08:45:02.627774] rdma.c:2724:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:06:40.364 [2024-06-09 08:45:02.641078] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x7128f0/0x711f30) succeed. 00:06:40.364 [2024-06-09 08:45:02.649988] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x713ca0/0x7124b0) succeed. 00:06:40.364 [2024-06-09 08:45:02.650010] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:40.364 Malloc1 00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:40.364 [2024-06-09 08:45:02.808039] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:40.364 08:45:02 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[
00:06:40.364 {
00:06:40.364 "name": "Malloc1",
00:06:40.364 "aliases": [
00:06:40.364 "30ca6345-b254-4929-81f6-d9d6ff4db903"
00:06:40.364 ],
00:06:40.364 "product_name": "Malloc disk",
00:06:40.364 "block_size": 512,
00:06:40.364 "num_blocks": 1048576,
00:06:40.364 "uuid": "30ca6345-b254-4929-81f6-d9d6ff4db903",
00:06:40.364 "assigned_rate_limits": {
00:06:40.364 "rw_ios_per_sec": 0,
00:06:40.364 "rw_mbytes_per_sec": 0,
00:06:40.364 "r_mbytes_per_sec": 0,
00:06:40.364 "w_mbytes_per_sec": 0
00:06:40.364 },
00:06:40.364 "claimed": true,
00:06:40.364 "claim_type": "exclusive_write",
00:06:40.364 "zoned": false,
00:06:40.364 "supported_io_types": {
00:06:40.364 "read": true,
00:06:40.364 "write": true,
00:06:40.364 "unmap": true,
00:06:40.364 "write_zeroes": true,
00:06:40.364 "flush": true,
00:06:40.364 "reset": true,
00:06:40.364 "compare": false,
00:06:40.364 "compare_and_write": false,
00:06:40.364 "abort": true,
00:06:40.364 "nvme_admin": false,
00:06:40.364 "nvme_io": false
00:06:40.364 },
00:06:40.364 "memory_domains": [
00:06:40.364 {
00:06:40.364 "dma_device_id": "system",
00:06:40.364 "dma_device_type": 1
00:06:40.364 },
00:06:40.364 {
00:06:40.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:40.364 "dma_device_type": 2
00:06:40.364 }
00:06:40.364 ],
00:06:40.364 "driver_specific": {}
00:06:40.364 }
00:06:40.364 ]'
00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .block_size'
00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bs=512
00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks'
00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576
00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512
00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # echo 512
00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
00:06:40.364 08:45:02 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
00:06:40.623 08:45:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:06:40.623 08:45:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # local i=0
00:06:40.623 08:45:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0
00:06:40.623 08:45:03 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]]
00:06:40.623 08:45:03 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:06:43.156 08:45:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:06:43.156 08:45:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:06:43.156 08:45:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:06:43.156 08:45:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:06:43.156 08:45:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:06:43.156 08:45:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:06:43.156 08:45:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:43.156 08:45:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:43.156 08:45:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:43.156 08:45:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:43.156 08:45:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:43.156 08:45:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:43.156 08:45:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:43.156 08:45:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:43.156 08:45:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:43.156 08:45:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:43.156 08:45:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:43.156 08:45:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:43.156 08:45:05 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:44.093 ************************************ 00:06:44.093 START TEST filesystem_ext4 00:06:44.093 ************************************ 00:06:44.093 08:45:06 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local i=0
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local force
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']'
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # force=-F
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:06:44.093 mke2fs 1.46.5 (30-Dec-2021)
00:06:44.093 Discarding device blocks: 0/522240 done
00:06:44.093 Creating filesystem with 522240 1k blocks and 130560 inodes
00:06:44.093 Filesystem UUID: 21b0add8-7033-4e10-9104-c51843d155c1
00:06:44.093 Superblock backups stored on blocks:
00:06:44.093 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:06:44.093
00:06:44.093 Allocating group tables: 0/64 done
00:06:44.093 Writing inode tables: 0/64 done
00:06:44.093 Creating journal (8192 blocks): done
00:06:44.093 Writing superblocks and filesystem accounting information: 0/64 done
00:06:44.093
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@944 -- # return 0
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1178177
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:06:44.093
00:06:44.093 real 0m0.176s
00:06:44.093 user 0m0.025s
00:06:44.093 sys 0m0.062s
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x
00:06:44.093 ************************************
00:06:44.093 END TEST filesystem_ext4
00:06:44.093 ************************************
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']'
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:44.093 ************************************
00:06:44.093 START TEST filesystem_btrfs
00:06:44.093 ************************************
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1
00:06:44.093 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local i=0
00:06:44.094 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local force
00:06:44.094 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']'
00:06:44.094 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # force=-f
00:06:44.094 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:06:44.353 btrfs-progs v6.6.2
00:06:44.353 See https://btrfs.readthedocs.io for more information.
00:06:44.353 00:06:44.353 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:06:44.353 NOTE: several default settings have changed in version 5.15, please make sure 00:06:44.353 this does not affect your deployments: 00:06:44.353 - DUP for metadata (-m dup) 00:06:44.353 - enabled no-holes (-O no-holes) 00:06:44.353 - enabled free-space-tree (-R free-space-tree) 00:06:44.353 00:06:44.353 Label: (null) 00:06:44.353 UUID: f6d06c0d-4e97-40a8-a689-33e8f2899ebe 00:06:44.353 Node size: 16384 00:06:44.353 Sector size: 4096 00:06:44.353 Filesystem size: 510.00MiB 00:06:44.353 Block group profiles: 00:06:44.353 Data: single 8.00MiB 00:06:44.353 Metadata: DUP 32.00MiB 00:06:44.353 System: DUP 8.00MiB 00:06:44.353 SSD detected: yes 00:06:44.353 Zoned device: no 00:06:44.353 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:44.353 Runtime features: free-space-tree 00:06:44.353 Checksum: crc32c 00:06:44.353 Number of devices: 1 00:06:44.353 Devices: 00:06:44.353 ID SIZE PATH 00:06:44.353 1 510.00MiB /dev/nvme0n1p1 00:06:44.353 00:06:44.353 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@944 -- # return 0 00:06:44.353 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:44.353 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:44.353 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:44.353 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:44.353 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:44.353 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:44.353 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:44.353 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1178177 00:06:44.353 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:44.353 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:44.353 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:44.353 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:44.353 00:06:44.353 real 0m0.243s 00:06:44.353 user 0m0.030s 00:06:44.353 sys 0m0.120s 00:06:44.353 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:44.353 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:44.353 ************************************ 00:06:44.353 END TEST filesystem_btrfs 00:06:44.353 ************************************ 00:06:44.353 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 
-- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:44.353 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:06:44.353 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:44.353 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:44.353 ************************************ 00:06:44.353 START TEST filesystem_xfs 00:06:44.353 ************************************ 00:06:44.353 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:06:44.353 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:44.353 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:44.353 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:44.353 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:06:44.353 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:06:44.353 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local i=0 00:06:44.353 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local force 00:06:44.353 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:06:44.353 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # force=-f 00:06:44.353 08:45:06 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:44.612 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:44.612 = sectsz=512 attr=2, projid32bit=1 00:06:44.612 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:44.612 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:44.612 data = bsize=4096 blocks=130560, imaxpct=25 00:06:44.612 = sunit=0 swidth=0 blks 00:06:44.612 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:44.612 log =internal log bsize=4096 blocks=16384, version=2 00:06:44.612 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:44.612 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:44.612 Discarding blocks...Done. 
00:06:44.612 08:45:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@944 -- # return 0 00:06:44.612 08:45:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:44.612 08:45:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:44.612 08:45:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:44.612 08:45:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:44.612 08:45:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:44.613 08:45:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:44.613 08:45:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:44.613 08:45:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1178177 00:06:44.613 08:45:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:44.613 08:45:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:44.613 08:45:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:44.613 08:45:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:44.613 00:06:44.613 real 0m0.190s 00:06:44.613 user 0m0.022s 00:06:44.613 sys 0m0.067s 00:06:44.613 08:45:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:44.613 08:45:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:44.613 ************************************ 00:06:44.613 END TEST filesystem_xfs 00:06:44.613 ************************************ 00:06:44.613 08:45:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:44.613 08:45:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:44.613 08:45:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:45.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:45.548 08:45:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:45.548 08:45:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:06:45.548 08:45:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:06:45.548 08:45:07 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:45.548 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o 
NAME,SERIAL 00:06:45.548 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:45.548 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:06:45.548 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:45.549 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:45.549 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:45.549 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:45.549 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:45.549 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1178177 00:06:45.549 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 1178177 ']' 00:06:45.549 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # kill -0 1178177 00:06:45.549 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # uname 00:06:45.549 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:45.549 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1178177 00:06:45.549 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:45.549 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:45.549 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1178177' 00:06:45.549 killing process with pid 1178177 00:06:45.549 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # kill 1178177 00:06:45.549 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # wait 1178177 00:06:46.116 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:46.116 00:06:46.116 real 0m6.722s 00:06:46.116 user 0m26.261s 00:06:46.116 sys 0m1.029s 00:06:46.116 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:46.116 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:46.116 ************************************ 00:06:46.116 END TEST nvmf_filesystem_no_in_capsule 00:06:46.116 ************************************ 00:06:46.116 08:45:08 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:46.116 08:45:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:46.116 08:45:08 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:46.116 08:45:08 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@10 -- # set +x 00:06:46.116 ************************************ 00:06:46.116 START TEST nvmf_filesystem_in_capsule 00:06:46.116 ************************************ 00:06:46.116 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # nvmf_filesystem_part 4096 00:06:46.116 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:46.116 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:46.116 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:46.116 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:46.116 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:46.116 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1179702 00:06:46.116 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1179702 00:06:46.116 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:46.116 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@830 -- # '[' -z 1179702 ']' 00:06:46.116 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.116 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:46.116 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.116 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:46.116 08:45:08 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:46.116 [2024-06-09 08:45:08.555517] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:46.116 [2024-06-09 08:45:08.555553] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:46.116 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.116 [2024-06-09 08:45:08.611015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:46.375 [2024-06-09 08:45:08.692538] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:46.375 [2024-06-09 08:45:08.692573] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:46.375 [2024-06-09 08:45:08.692580] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:46.375 [2024-06-09 08:45:08.692586] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:06:46.375 [2024-06-09 08:45:08.692591] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:46.375 [2024-06-09 08:45:08.692626] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.375 [2024-06-09 08:45:08.692721] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.375 [2024-06-09 08:45:08.692817] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.375 [2024-06-09 08:45:08.692819] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.942 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:46.942 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@863 -- # return 0 00:06:46.942 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:46.942 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:46.942 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:46.942 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:46.942 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:46.942 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:06:46.942 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:46.942 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:46.942 [2024-06-09 08:45:09.420961] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x23698f0/0x2368f30) succeed. 00:06:46.942 [2024-06-09 08:45:09.429855] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x236aca0/0x23694b0) succeed. 00:06:46.942 [2024-06-09 08:45:09.429877] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:06:46.942 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:46.942 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:46.942 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:46.942 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:47.202 Malloc1 00:06:47.202 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:47.202 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:47.202 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:47.202 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:47.202 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:47.202 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:47.202 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:47.202 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:47.202 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:47.202 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:47.202 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:47.202 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:47.202 [2024-06-09 08:45:09.575622] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:47.202 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:47.202 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:47.202 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_name=Malloc1 00:06:47.202 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_info 00:06:47.202 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bs 00:06:47.202 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local nb 00:06:47.202 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:47.202 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:47.202 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- 
# set +x 00:06:47.202 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:47.202 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bdev_info='[ 00:06:47.202 { 00:06:47.202 "name": "Malloc1", 00:06:47.202 "aliases": [ 00:06:47.202 "a97fefb6-41f7-43d4-905d-7da9cbb41e76" 00:06:47.202 ], 00:06:47.202 "product_name": "Malloc disk", 00:06:47.202 "block_size": 512, 00:06:47.202 "num_blocks": 1048576, 00:06:47.202 "uuid": "a97fefb6-41f7-43d4-905d-7da9cbb41e76", 00:06:47.202 "assigned_rate_limits": { 00:06:47.202 "rw_ios_per_sec": 0, 00:06:47.202 "rw_mbytes_per_sec": 0, 00:06:47.202 "r_mbytes_per_sec": 0, 00:06:47.202 "w_mbytes_per_sec": 0 00:06:47.202 }, 00:06:47.202 "claimed": true, 00:06:47.202 "claim_type": "exclusive_write", 00:06:47.202 "zoned": false, 00:06:47.202 "supported_io_types": { 00:06:47.202 "read": true, 00:06:47.202 "write": true, 00:06:47.202 "unmap": true, 00:06:47.202 "write_zeroes": true, 00:06:47.202 "flush": true, 00:06:47.202 "reset": true, 00:06:47.202 "compare": false, 00:06:47.202 "compare_and_write": false, 00:06:47.202 "abort": true, 00:06:47.202 "nvme_admin": false, 00:06:47.202 "nvme_io": false 00:06:47.202 }, 00:06:47.202 "memory_domains": [ 00:06:47.202 { 00:06:47.202 "dma_device_id": "system", 00:06:47.202 "dma_device_type": 1 00:06:47.202 }, 00:06:47.202 { 00:06:47.202 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.202 "dma_device_type": 2 00:06:47.202 } 00:06:47.202 ], 00:06:47.202 "driver_specific": {} 00:06:47.202 } 00:06:47.202 ]' 00:06:47.202 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .block_size' 00:06:47.202 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bs=512 00:06:47.202 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .num_blocks' 00:06:47.202 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # nb=1048576 00:06:47.202 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_size=512 00:06:47.202 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # echo 512 00:06:47.202 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:47.202 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:06:47.462 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:47.462 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # local i=0 00:06:47.462 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:06:47.462 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:06:47.462 08:45:09 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # sleep 2 00:06:49.365 08:45:11 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:06:49.365 08:45:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:06:49.365 08:45:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:06:49.365 08:45:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:06:49.365 08:45:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:06:49.365 08:45:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # return 0 00:06:49.365 08:45:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:49.624 08:45:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:49.624 08:45:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:49.624 08:45:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:49.624 08:45:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:49.624 08:45:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:49.624 08:45:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:49.624 08:45:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:49.624 08:45:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:49.624 08:45:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:49.624 08:45:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:49.624 08:45:11 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:49.624 08:45:12 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:50.559 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:50.559 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:50.559 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:06:50.559 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:50.559 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:50.559 ************************************ 00:06:50.559 START TEST filesystem_in_capsule_ext4 00:06:50.559 ************************************ 00:06:50.559 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:50.559 08:45:13 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:50.559 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:50.559 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:50.559 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local fstype=ext4 00:06:50.559 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:06:50.559 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local i=0 00:06:50.559 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local force 00:06:50.559 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # '[' ext4 = ext4 ']' 00:06:50.559 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # force=-F 00:06:50.559 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:50.559 mke2fs 1.46.5 (30-Dec-2021) 00:06:50.817 Discarding device blocks: 0/522240 done 00:06:50.817 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:50.817 Filesystem UUID: 123efc14-df88-4e6b-9990-0040a91aafa3 00:06:50.817 Superblock backups stored on blocks: 00:06:50.817 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:50.817 00:06:50.817 Allocating group tables: 0/64 done 00:06:50.817 Writing inode tables: 0/64 done 00:06:50.817 Creating journal (8192 blocks): done 00:06:50.817 Writing superblocks and filesystem accounting information: 0/64 done 00:06:50.817 00:06:50.817 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@944 -- # return 0 00:06:50.817 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:50.817 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:50.817 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:50.817 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:50.817 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:50.817 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:50.817 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:50.817 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1179702 00:06:50.817 08:45:13 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:50.817 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:50.817 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:50.817 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:50.817 00:06:50.817 real 0m0.175s 00:06:50.817 user 0m0.023s 00:06:50.817 sys 0m0.064s 00:06:50.817 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:50.817 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:50.817 ************************************ 00:06:50.817 END TEST filesystem_in_capsule_ext4 00:06:50.817 ************************************ 00:06:50.817 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:50.817 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:06:50.817 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:50.817 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:50.817 ************************************ 00:06:50.817 START TEST filesystem_in_capsule_btrfs 00:06:50.817 ************************************ 00:06:50.817 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:50.817 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:50.817 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:50.817 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:50.817 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local fstype=btrfs 00:06:50.817 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:06:50.817 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local i=0 00:06:50.817 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local force 00:06:50.817 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # '[' btrfs = ext4 ']' 00:06:50.817 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # force=-f 00:06:50.817 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@936 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:51.075 btrfs-progs v6.6.2 00:06:51.075 See https://btrfs.readthedocs.io for more information. 00:06:51.075 00:06:51.075 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:06:51.075 NOTE: several default settings have changed in version 5.15, please make sure 00:06:51.075 this does not affect your deployments: 00:06:51.075 - DUP for metadata (-m dup) 00:06:51.075 - enabled no-holes (-O no-holes) 00:06:51.075 - enabled free-space-tree (-R free-space-tree) 00:06:51.075 00:06:51.075 Label: (null) 00:06:51.075 UUID: b3685c68-0ee5-4536-b8d4-e6df829ac090 00:06:51.075 Node size: 16384 00:06:51.075 Sector size: 4096 00:06:51.075 Filesystem size: 510.00MiB 00:06:51.075 Block group profiles: 00:06:51.075 Data: single 8.00MiB 00:06:51.075 Metadata: DUP 32.00MiB 00:06:51.075 System: DUP 8.00MiB 00:06:51.075 SSD detected: yes 00:06:51.075 Zoned device: no 00:06:51.075 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:51.075 Runtime features: free-space-tree 00:06:51.075 Checksum: crc32c 00:06:51.075 Number of devices: 1 00:06:51.075 Devices: 00:06:51.075 ID SIZE PATH 00:06:51.075 1 510.00MiB /dev/nvme0n1p1 00:06:51.075 00:06:51.075 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@944 -- # return 0 00:06:51.075 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:51.075 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:51.075 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:51.075 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:51.075 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:51.075 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:51.075 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:51.075 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1179702 00:06:51.075 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:51.075 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:51.075 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:51.075 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:51.075 00:06:51.075 real 0m0.250s 00:06:51.075 user 0m0.027s 00:06:51.075 sys 0m0.120s 00:06:51.075 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:51.075 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@10 -- # set +x 00:06:51.075 ************************************ 00:06:51.075 END TEST filesystem_in_capsule_btrfs 00:06:51.075 ************************************ 00:06:51.333 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:51.333 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1100 -- # '[' 4 -le 1 ']' 00:06:51.333 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:51.333 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:51.333 ************************************ 00:06:51.333 START TEST filesystem_in_capsule_xfs 00:06:51.333 ************************************ 00:06:51.333 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # nvmf_filesystem_create xfs nvme0n1 00:06:51.333 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:51.333 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:51.333 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:51.333 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local fstype=xfs 00:06:51.333 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local dev_name=/dev/nvme0n1p1 00:06:51.333 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local i=0 00:06:51.333 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local force 00:06:51.333 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # '[' xfs = ext4 ']' 00:06:51.333 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # force=-f 00:06:51.333 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:51.333 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:51.333 = sectsz=512 attr=2, projid32bit=1 00:06:51.333 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:51.333 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:51.333 data = bsize=4096 blocks=130560, imaxpct=25 00:06:51.333 = sunit=0 swidth=0 blks 00:06:51.333 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:51.333 log =internal log bsize=4096 blocks=16384, version=2 00:06:51.333 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:51.333 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:51.333 Discarding blocks...Done. 
00:06:51.333 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@944 -- # return 0 00:06:51.333 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:51.333 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:51.333 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:51.333 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:51.333 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:51.333 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:51.333 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:51.333 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1179702 00:06:51.333 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:51.333 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:51.333 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:51.333 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:51.333 00:06:51.333 real 0m0.187s 00:06:51.333 user 0m0.019s 00:06:51.333 sys 0m0.062s 00:06:51.333 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:51.333 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:51.333 ************************************ 00:06:51.333 END TEST filesystem_in_capsule_xfs 00:06:51.333 ************************************ 00:06:51.333 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:51.591 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:51.591 08:45:13 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:52.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:52.526 08:45:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:52.526 08:45:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # local i=0 00:06:52.526 08:45:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:06:52.526 08:45:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:52.526 08:45:14 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:06:52.526 08:45:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:52.526 08:45:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1230 -- # return 0 00:06:52.526 08:45:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:52.526 08:45:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:52.526 08:45:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:52.526 08:45:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:52.526 08:45:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:52.526 08:45:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1179702 00:06:52.526 08:45:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@949 -- # '[' -z 1179702 ']' 00:06:52.526 08:45:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # kill -0 1179702 00:06:52.526 08:45:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # uname 00:06:52.526 08:45:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:52.526 08:45:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1179702 00:06:52.526 08:45:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:52.526 08:45:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:52.526 08:45:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1179702' 00:06:52.526 killing process with pid 1179702 00:06:52.526 08:45:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # kill 1179702 00:06:52.526 08:45:14 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # wait 1179702 00:06:52.785 08:45:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:52.785 00:06:52.785 real 0m6.706s 00:06:52.785 user 0m26.199s 00:06:52.785 sys 0m1.031s 00:06:52.785 08:45:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:52.785 08:45:15 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:52.785 ************************************ 00:06:52.785 END TEST nvmf_filesystem_in_capsule 00:06:52.785 ************************************ 00:06:52.785 08:45:15 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:52.785 08:45:15 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:52.785 08:45:15 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:52.785 08:45:15 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:06:52.785 
08:45:15 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:06:52.785 08:45:15 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:52.785 08:45:15 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:52.785 08:45:15 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:06:52.785 rmmod nvme_rdma 00:06:52.785 rmmod nvme_fabrics 00:06:52.785 08:45:15 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:52.785 08:45:15 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:52.785 08:45:15 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:52.785 08:45:15 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:52.785 08:45:15 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:52.785 08:45:15 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:06:52.785 00:06:52.785 real 0m18.664s 00:06:52.785 user 0m53.967s 00:06:52.785 sys 0m5.840s 00:06:52.785 08:45:15 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:52.785 08:45:15 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:52.785 ************************************ 00:06:52.785 END TEST nvmf_filesystem 00:06:52.785 ************************************ 00:06:52.785 08:45:15 nvmf_rdma -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:06:52.785 08:45:15 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:52.785 08:45:15 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:52.785 08:45:15 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:06:53.044 ************************************ 00:06:53.044 START TEST nvmf_target_discovery 00:06:53.044 ************************************ 00:06:53.044 08:45:15 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:06:53.044 * Looking for test storage... 
00:06:53.044 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:06:53.044 08:45:15 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:06:53.044 08:45:15 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:53.044 08:45:15 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:53.044 08:45:15 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:53.044 08:45:15 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:53.044 08:45:15 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:53.044 08:45:15 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:53.044 08:45:15 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:53.044 08:45:15 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:53.044 08:45:15 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:53.044 08:45:15 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:53.044 08:45:15 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:53.044 08:45:15 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:06:53.044 08:45:15 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:06:53.044 08:45:15 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:53.044 08:45:15 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:53.044 08:45:15 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:53.044 08:45:15 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:53.044 08:45:15 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:06:53.044 08:45:15 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.044 08:45:15 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:53.044 08:45:15 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.044 08:45:15 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.045 08:45:15 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.045 08:45:15 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.045 08:45:15 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:53.045 08:45:15 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.045 08:45:15 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:53.045 08:45:15 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:53.045 08:45:15 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:53.045 08:45:15 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:53.045 08:45:15 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:53.045 08:45:15 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:53.045 08:45:15 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:53.045 08:45:15 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:53.045 08:45:15 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:53.045 08:45:15 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:53.045 08:45:15 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:53.045 08:45:15 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:53.045 08:45:15 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:53.045 08:45:15 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:53.045 08:45:15 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:06:53.045 08:45:15 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:53.045 08:45:15 nvmf_rdma.nvmf_target_discovery -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:06:53.045 08:45:15 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:53.045 08:45:15 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:53.045 08:45:15 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:53.045 08:45:15 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:53.045 08:45:15 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:53.045 08:45:15 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:53.045 08:45:15 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:53.045 08:45:15 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:06:53.045 08:45:15 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:58.313 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:58.313 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:58.314 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@377 -- # 
modinfo irdma 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:58.314 Found net devices under 0000:af:00.0: cvl_0_0 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:58.314 Found net devices under 0000:af:00.1: cvl_0_1 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@420 -- # rdma_device_init 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@58 -- # uname 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@62 -- # modprobe ib_cm 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@63 -- # modprobe ib_core 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@64 -- # modprobe ib_umad 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe iw_cm 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@502 -- # allocate_nic_ips 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- 
nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # get_rdma_if_list 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo cvl_0_0 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo cvl_0_1 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:06:58.314 8: cvl_0_0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:06:58.314 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:06:58.314 altname enp175s0f0np0 00:06:58.314 altname ens801f0np0 00:06:58.314 inet 192.168.100.8/24 scope global cvl_0_0 00:06:58.314 valid_lft forever preferred_lft forever 00:06:58.314 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:06:58.314 valid_lft forever preferred_lft forever 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:06:58.314 08:45:20 
nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:06:58.314 9: cvl_0_1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:06:58.314 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:06:58.314 altname enp175s0f1np1 00:06:58.314 altname ens801f1np1 00:06:58.314 inet 192.168.100.9/24 scope global cvl_0_1 00:06:58.314 valid_lft forever preferred_lft forever 00:06:58.314 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:06:58.314 valid_lft forever preferred_lft forever 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # get_rdma_if_list 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo cvl_0_0 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]]
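allocate_nic_ips has now resolved 192.168.100.8 (cvl_0_0) and 192.168.100.9 (cvl_0_1) from the interface listings above. The pipeline traced at nvmf/common.sh@113 can be reproduced standalone; a sketch using the first interface name from the log:

  # print the first IPv4 address of an RDMA-capable interface, as get_ip_address does
  ip -o -4 addr show cvl_0_0 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.8

00:06:58.314 08:45:20 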
nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo cvl_0_1 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:58.314 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:06:58.315 192.168.100.9' 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:06:58.315 192.168.100.9' 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # head -n 1 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:06:58.315 192.168.100.9' 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # tail -n +2 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # head -n 1 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@723 -- # xtrace_disable 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1184290 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1184290 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@830 -- 
# '[' -z 1184290 ']' 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local max_retries=100 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:58.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@839 -- # xtrace_disable 00:06:58.315 08:45:20 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:58.315 [2024-06-09 08:45:20.407951] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:06:58.315 [2024-06-09 08:45:20.407995] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:58.315 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.315 [2024-06-09 08:45:20.463246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:58.315 [2024-06-09 08:45:20.541822] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:58.315 [2024-06-09 08:45:20.541857] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:58.315 [2024-06-09 08:45:20.541863] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:58.315 [2024-06-09 08:45:20.541869] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:58.315 [2024-06-09 08:45:20.541874] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
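The target is initializing here: nvmf_tgt was launched with -i 0 -e 0xFFFF -m 0xF (shared-memory id 0, all tracepoint groups, a four-core reactor mask), and the harness sits in waitforlisten until the RPC socket answers. A rough hand-run equivalent, assuming the default /var/tmp/spdk.sock socket named in the log and running from the spdk checkout:

  # start the NVMe-oF target and poll its RPC socket until it responds
  sudo ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  until sudo ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done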
00:06:58.315 [2024-06-09 08:45:20.541931] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.315 [2024-06-09 08:45:20.542045] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.315 [2024-06-09 08:45:20.542129] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.315 [2024-06-09 08:45:20.542130] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.883 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:06:58.883 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@863 -- # return 0 00:06:58.883 08:45:21 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:58.883 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@729 -- # xtrace_disable 00:06:58.883 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:58.883 08:45:21 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:58.883 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:06:58.883 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:58.883 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:58.883 [2024-06-09 08:45:21.276754] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0xa3c8f0/0xa3bf30) succeed. 00:06:58.883 [2024-06-09 08:45:21.285664] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0xa3dca0/0xa3c4b0) succeed. 00:06:58.883 [2024-06-09 08:45:21.285685] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:06:58.883 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:58.883 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:06:58.883 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:58.883 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:06:58.883 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:58.883 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:58.883 Null1 00:06:58.883 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:58.883 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:58.883 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:58.883 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:58.883 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:58.884 [2024-06-09 08:45:21.334064] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:58.884 Null2 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:58.884 
08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:58.884 Null3 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:58.884 Null4 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:58.884 08:45:21 
nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:58.884 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.143 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:59.143 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:06:59.143 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:59.143 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.143 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:59.143 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:06:59.143 00:06:59.143 Discovery Log Number of Records 6, Generation counter 6 00:06:59.143 =====Discovery Log Entry 0====== 00:06:59.143 trtype: rdma 00:06:59.143 adrfam: ipv4 00:06:59.143 subtype: current discovery subsystem 00:06:59.143 treq: not required 00:06:59.143 portid: 0 00:06:59.143 trsvcid: 4420 00:06:59.143 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:59.143 traddr: 192.168.100.8 00:06:59.143 eflags: explicit discovery connections, duplicate discovery information 00:06:59.143 rdma_prtype: not specified 00:06:59.143 rdma_qptype: connected 00:06:59.143 rdma_cms: rdma-cm 00:06:59.143 rdma_pkey: 0x0000 00:06:59.143 =====Discovery Log Entry 1====== 00:06:59.143 trtype: rdma 00:06:59.144 adrfam: ipv4 00:06:59.144 subtype: nvme subsystem 00:06:59.144 treq: not required 00:06:59.144 portid: 0 00:06:59.144 trsvcid: 4420 00:06:59.144 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:59.144 traddr: 192.168.100.8 00:06:59.144 eflags: none 00:06:59.144 rdma_prtype: not specified 00:06:59.144 rdma_qptype: connected 00:06:59.144 rdma_cms: rdma-cm 00:06:59.144 rdma_pkey: 0x0000 00:06:59.144 =====Discovery 
Log Entry 2====== 00:06:59.144 trtype: rdma 00:06:59.144 adrfam: ipv4 00:06:59.144 subtype: nvme subsystem 00:06:59.144 treq: not required 00:06:59.144 portid: 0 00:06:59.144 trsvcid: 4420 00:06:59.144 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:59.144 traddr: 192.168.100.8 00:06:59.144 eflags: none 00:06:59.144 rdma_prtype: not specified 00:06:59.144 rdma_qptype: connected 00:06:59.144 rdma_cms: rdma-cm 00:06:59.144 rdma_pkey: 0x0000 00:06:59.144 =====Discovery Log Entry 3====== 00:06:59.144 trtype: rdma 00:06:59.144 adrfam: ipv4 00:06:59.144 subtype: nvme subsystem 00:06:59.144 treq: not required 00:06:59.144 portid: 0 00:06:59.144 trsvcid: 4420 00:06:59.144 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:59.144 traddr: 192.168.100.8 00:06:59.144 eflags: none 00:06:59.144 rdma_prtype: not specified 00:06:59.144 rdma_qptype: connected 00:06:59.144 rdma_cms: rdma-cm 00:06:59.144 rdma_pkey: 0x0000 00:06:59.144 =====Discovery Log Entry 4====== 00:06:59.144 trtype: rdma 00:06:59.144 adrfam: ipv4 00:06:59.144 subtype: nvme subsystem 00:06:59.144 treq: not required 00:06:59.144 portid: 0 00:06:59.144 trsvcid: 4420 00:06:59.144 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:59.144 traddr: 192.168.100.8 00:06:59.144 eflags: none 00:06:59.144 rdma_prtype: not specified 00:06:59.144 rdma_qptype: connected 00:06:59.144 rdma_cms: rdma-cm 00:06:59.144 rdma_pkey: 0x0000 00:06:59.144 =====Discovery Log Entry 5====== 00:06:59.144 trtype: rdma 00:06:59.144 adrfam: ipv4 00:06:59.144 subtype: discovery subsystem referral 00:06:59.144 treq: not required 00:06:59.144 portid: 0 00:06:59.144 trsvcid: 4430 00:06:59.144 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:59.144 traddr: 192.168.100.8 00:06:59.144 eflags: none 00:06:59.144 rdma_prtype: unrecognized 00:06:59.144 rdma_qptype: unrecognized 00:06:59.144 rdma_cms: unrecognized 00:06:59.144 rdma_pkey: 0x0000 00:06:59.144 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:59.144 Perform nvmf subsystem discovery via RPC 00:06:59.144 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:59.144 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:59.144 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.144 [ 00:06:59.144 { 00:06:59.144 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:59.144 "subtype": "Discovery", 00:06:59.144 "listen_addresses": [ 00:06:59.144 { 00:06:59.144 "trtype": "RDMA", 00:06:59.144 "adrfam": "IPv4", 00:06:59.144 "traddr": "192.168.100.8", 00:06:59.144 "trsvcid": "4420" 00:06:59.144 } 00:06:59.144 ], 00:06:59.144 "allow_any_host": true, 00:06:59.144 "hosts": [] 00:06:59.144 }, 00:06:59.144 { 00:06:59.144 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:59.144 "subtype": "NVMe", 00:06:59.144 "listen_addresses": [ 00:06:59.144 { 00:06:59.144 "trtype": "RDMA", 00:06:59.144 "adrfam": "IPv4", 00:06:59.144 "traddr": "192.168.100.8", 00:06:59.144 "trsvcid": "4420" 00:06:59.144 } 00:06:59.144 ], 00:06:59.144 "allow_any_host": true, 00:06:59.144 "hosts": [], 00:06:59.144 "serial_number": "SPDK00000000000001", 00:06:59.144 "model_number": "SPDK bdev Controller", 00:06:59.144 "max_namespaces": 32, 00:06:59.144 "min_cntlid": 1, 00:06:59.144 "max_cntlid": 65519, 00:06:59.144 "namespaces": [ 00:06:59.144 { 00:06:59.144 "nsid": 1, 00:06:59.144 "bdev_name": "Null1", 00:06:59.144 "name": "Null1", 00:06:59.144 "nguid": "818649FFF6BB4EE7BFC9F0651D040D27", 
00:06:59.144 "uuid": "818649ff-f6bb-4ee7-bfc9-f0651d040d27" 00:06:59.144 } 00:06:59.144 ] 00:06:59.144 }, 00:06:59.144 { 00:06:59.144 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:59.144 "subtype": "NVMe", 00:06:59.144 "listen_addresses": [ 00:06:59.144 { 00:06:59.144 "trtype": "RDMA", 00:06:59.144 "adrfam": "IPv4", 00:06:59.144 "traddr": "192.168.100.8", 00:06:59.144 "trsvcid": "4420" 00:06:59.144 } 00:06:59.144 ], 00:06:59.144 "allow_any_host": true, 00:06:59.144 "hosts": [], 00:06:59.144 "serial_number": "SPDK00000000000002", 00:06:59.144 "model_number": "SPDK bdev Controller", 00:06:59.144 "max_namespaces": 32, 00:06:59.144 "min_cntlid": 1, 00:06:59.144 "max_cntlid": 65519, 00:06:59.144 "namespaces": [ 00:06:59.144 { 00:06:59.144 "nsid": 1, 00:06:59.144 "bdev_name": "Null2", 00:06:59.144 "name": "Null2", 00:06:59.144 "nguid": "45D2B0AF11D84DBA8ABAD490B3C5C3A3", 00:06:59.144 "uuid": "45d2b0af-11d8-4dba-8aba-d490b3c5c3a3" 00:06:59.144 } 00:06:59.144 ] 00:06:59.144 }, 00:06:59.144 { 00:06:59.144 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:59.144 "subtype": "NVMe", 00:06:59.144 "listen_addresses": [ 00:06:59.144 { 00:06:59.144 "trtype": "RDMA", 00:06:59.144 "adrfam": "IPv4", 00:06:59.144 "traddr": "192.168.100.8", 00:06:59.144 "trsvcid": "4420" 00:06:59.144 } 00:06:59.144 ], 00:06:59.144 "allow_any_host": true, 00:06:59.144 "hosts": [], 00:06:59.144 "serial_number": "SPDK00000000000003", 00:06:59.144 "model_number": "SPDK bdev Controller", 00:06:59.144 "max_namespaces": 32, 00:06:59.144 "min_cntlid": 1, 00:06:59.144 "max_cntlid": 65519, 00:06:59.144 "namespaces": [ 00:06:59.144 { 00:06:59.144 "nsid": 1, 00:06:59.144 "bdev_name": "Null3", 00:06:59.144 "name": "Null3", 00:06:59.144 "nguid": "818782B9318746B992704884819AA910", 00:06:59.144 "uuid": "818782b9-3187-46b9-9270-4884819aa910" 00:06:59.144 } 00:06:59.144 ] 00:06:59.144 }, 00:06:59.144 { 00:06:59.144 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:59.144 "subtype": "NVMe", 00:06:59.144 "listen_addresses": [ 00:06:59.144 { 00:06:59.144 "trtype": "RDMA", 00:06:59.144 "adrfam": "IPv4", 00:06:59.144 "traddr": "192.168.100.8", 00:06:59.144 "trsvcid": "4420" 00:06:59.144 } 00:06:59.144 ], 00:06:59.144 "allow_any_host": true, 00:06:59.144 "hosts": [], 00:06:59.144 "serial_number": "SPDK00000000000004", 00:06:59.144 "model_number": "SPDK bdev Controller", 00:06:59.144 "max_namespaces": 32, 00:06:59.144 "min_cntlid": 1, 00:06:59.144 "max_cntlid": 65519, 00:06:59.144 "namespaces": [ 00:06:59.144 { 00:06:59.144 "nsid": 1, 00:06:59.144 "bdev_name": "Null4", 00:06:59.144 "name": "Null4", 00:06:59.144 "nguid": "586240035A7A4D21B09828483ECCD299", 00:06:59.144 "uuid": "58624003-5a7a-4d21-b098-28483eccd299" 00:06:59.144 } 00:06:59.144 ] 00:06:59.144 } 00:06:59.144 ] 00:06:59.144 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:59.144 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:06:59.144 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:59.144 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:59.144 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:59.144 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.144 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:59.144 08:45:21 nvmf_rdma.nvmf_target_discovery 
-- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:59.144 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:59.144 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.144 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:59.144 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:59.144 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:59.144 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:59.144 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.144 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:59.144 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:59.144 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:59.144 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.144 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:59.144 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:59.144 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:59.144 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:59.144 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.144 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:59.144 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:59.145 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:59.145 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.145 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:59.145 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:59.145 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:59.145 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:59.145 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.145 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:59.145 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:59.145 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:59.145 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.145 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:59.145 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:06:59.145 08:45:21 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:06:59.145 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.145 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:59.145 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:59.145 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:59.145 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:59.145 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.145 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:59.404 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:06:59.404 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:59.404 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:59.404 08:45:21 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:06:59.404 08:45:21 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:59.404 08:45:21 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:06:59.404 08:45:21 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:06:59.404 08:45:21 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:06:59.404 08:45:21 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:06:59.404 08:45:21 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:59.404 08:45:21 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:06:59.404 rmmod nvme_rdma 00:06:59.404 rmmod nvme_fabrics 00:06:59.404 08:45:21 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:59.404 08:45:21 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:06:59.404 08:45:21 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:06:59.404 08:45:21 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1184290 ']' 00:06:59.404 08:45:21 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1184290 00:06:59.404 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@949 -- # '[' -z 1184290 ']' 00:06:59.404 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@953 -- # kill -0 1184290 00:06:59.404 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@954 -- # uname 00:06:59.404 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:06:59.404 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1184290 00:06:59.404 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:06:59.404 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:06:59.404 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1184290' 00:06:59.404 killing process with pid 1184290 00:06:59.404 08:45:21 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@968 -- # kill 1184290 00:06:59.404 08:45:21 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@973 -- # wait 1184290 00:06:59.663 08:45:22 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:59.663 08:45:22 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:06:59.664 00:06:59.664 real 0m6.642s 00:06:59.664 user 0m7.486s 00:06:59.664 sys 0m3.999s 00:06:59.664 08:45:22 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # xtrace_disable 00:06:59.664 08:45:22 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:59.664 ************************************ 00:06:59.664 END TEST nvmf_target_discovery 00:06:59.664 ************************************ 00:06:59.664 08:45:22 nvmf_rdma -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:06:59.664 08:45:22 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:06:59.664 08:45:22 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:06:59.664 08:45:22 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:06:59.664 ************************************ 00:06:59.664 START TEST nvmf_referrals 00:06:59.664 ************************************ 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:06:59.664 * Looking for test storage... 00:06:59.664 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:06:59.664 08:45:22 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:04.935 08:45:27 
nvmf_rdma.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:04.935 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:04.935 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@377 -- # modinfo irdma 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:04.935 Found net devices under 0000:af:00.0: cvl_0_0 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:04.935 Found net devices under 0000:af:00.1: cvl_0_1 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@420 -- # rdma_device_init 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@58 -- # uname 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- 
nvmf/common.sh@64 -- # modprobe ib_umad 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo cvl_0_0 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo cvl_0_1 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:04.935 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:07:04.935 8: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:07:04.936 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:07:04.936 altname enp175s0f0np0 00:07:04.936 altname ens801f0np0 00:07:04.936 inet 192.168.100.8/24 scope global 
cvl_0_0 00:07:04.936 valid_lft forever preferred_lft forever 00:07:04.936 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:07:04.936 valid_lft forever preferred_lft forever 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:07:04.936 9: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:07:04.936 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:07:04.936 altname enp175s0f1np1 00:07:04.936 altname ens801f1np1 00:07:04.936 inet 192.168.100.9/24 scope global cvl_0_1 00:07:04.936 valid_lft forever preferred_lft forever 00:07:04.936 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:07:04.936 valid_lft forever preferred_lft forever 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo cvl_0_0 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:04.936 08:45:27 
nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo cvl_0_1 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:04.936 192.168.100.9' 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # head -n 1 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:04.936 192.168.100.9' 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:07:04.936 192.168.100.9' 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # tail -n +2 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # head -n 1 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1187579 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1187579 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- 
common/autotest_common.sh@830 -- # '[' -z 1187579 ']' 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:04.936 08:45:27 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:04.936 [2024-06-09 08:45:27.451489] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:07:04.936 [2024-06-09 08:45:27.451535] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.936 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.195 [2024-06-09 08:45:27.507918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:05.195 [2024-06-09 08:45:27.588485] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:05.195 [2024-06-09 08:45:27.588522] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:05.195 [2024-06-09 08:45:27.588529] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:05.195 [2024-06-09 08:45:27.588535] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:05.195 [2024-06-09 08:45:27.588539] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:05.195 [2024-06-09 08:45:27.588599] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.195 [2024-06-09 08:45:27.588692] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.195 [2024-06-09 08:45:27.588758] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:05.195 [2024-06-09 08:45:27.588759] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.765 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:05.765 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@863 -- # return 0 00:07:05.765 08:45:28 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:05.765 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:05.765 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:05.765 08:45:28 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:05.765 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:05.765 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:05.765 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:05.765 [2024-06-09 08:45:28.321047] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0xd868f0/0xd85f30) succeed. 
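The create_ib_device notices here (one per RoCE port; the second follows below) are emitted while nvmf_create_transport registers the rdma transport inside the freshly started target. Reduced to stand-alone commands, the bring-up that referrals.sh drives looks like the sketch below — the rpc.py path and the polling loop standing in for waitforlisten are assumptions, while the core mask, shared-buffer count, and listener address are the ones visible in this log:

  # start the target and wait for its JSON-RPC socket to answer
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done   # crude stand-in for waitforlisten
  # create the RDMA transport and expose the discovery service on port 8009
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery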
00:07:06.043 [2024-06-09 08:45:28.330280] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0xd87ca0/0xd864b0) succeed. 00:07:06.043 [2024-06-09 08:45:28.330303] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:07:06.043 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:06.043 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:07:06.043 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:06.043 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.043 [2024-06-09 08:45:28.342521] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:07:06.043 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:06.043 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:07:06.043 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:06.043 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.043 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:06.043 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:06.044 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:06.311 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 
8009 -o json 00:07:06.570 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:06.570 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:06.570 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:06.570 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:06.570 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:06.570 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:06.570 08:45:28 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:06.570 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:06.570 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:06.570 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:06.570 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:06.571 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:06.571 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:06.830 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:06.830 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:06.830 08:45:29 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:06.830 08:45:29 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.830 08:45:29 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:06.830 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:06.830 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:06.830 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:06.830 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:06.830 08:45:29 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:06.830 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:06.830 08:45:29 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:06.830 08:45:29 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:06.830 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:06.830 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 
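Everything from referrals.sh@44 through the check above is one add/list/remove cycle, exercised twice: once with plain referrals to 127.0.0.2-4, and once with the -n flag to distinguish a referral to a discovery subsystem from one to the NVM subsystem nqn.2016-06.io.spdk:cnode1 (which the @67/@68 subnqn comparisons verify). Stripped of the test plumbing, the cycle is three RPCs plus the jq filter get_referral_ips uses on the rpc side (rpc.py path assumed; the addresses and the 4430 referral port are the test's own):

  # add a referral pointing at an NVM subsystem rather than a discovery subsystem
  ./scripts/rpc.py nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
  # list referral target addresses, sorted for a stable comparison
  ./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
  # remove it again; the same -t/-a/-s/-n tuple identifies the entry
  ./scripts/rpc.py nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1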
00:07:06.830 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:06.830 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:06.830 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:06.830 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:06.830 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:06.830 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:06.830 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:06.830 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:06.830 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:06.830 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:06.830 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:06.830 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:06.830 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:07.089 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:07.089 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:07.089 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:07.089 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:07.089 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:07.089 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:07.089 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:07.089 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:07.089 08:45:29 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:07.089 08:45:29 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:07.089 08:45:29 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:07.089 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:07.089 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:07.090 08:45:29 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:07.090 
08:45:29 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:07.090 08:45:29 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:07.090 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:07.090 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:07.090 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:07.090 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:07.090 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:07:07.090 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:07.090 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:07.349 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:07.349 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:07.349 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:07.349 08:45:29 nvmf_rdma.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:07.349 08:45:29 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:07.349 08:45:29 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:07.349 08:45:29 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:07.349 08:45:29 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:07.349 08:45:29 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:07.349 08:45:29 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:07.349 08:45:29 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:07.349 rmmod nvme_rdma 00:07:07.349 rmmod nvme_fabrics 00:07:07.349 08:45:29 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:07.349 08:45:29 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:07.349 08:45:29 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:07.349 08:45:29 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1187579 ']' 00:07:07.349 08:45:29 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1187579 00:07:07.349 08:45:29 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@949 -- # '[' -z 1187579 ']' 00:07:07.349 08:45:29 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@953 -- # kill -0 1187579 00:07:07.349 08:45:29 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@954 -- # uname 00:07:07.349 08:45:29 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:07:07.349 08:45:29 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1187579 00:07:07.349 08:45:29 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:07:07.349 08:45:29 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:07:07.349 08:45:29 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1187579' 00:07:07.349 killing process with pid 1187579 00:07:07.349 08:45:29 nvmf_rdma.nvmf_referrals -- 
common/autotest_common.sh@968 -- # kill 1187579 00:07:07.349 08:45:29 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@973 -- # wait 1187579 00:07:07.609 08:45:30 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:07.609 08:45:30 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:07:07.609 00:07:07.609 real 0m7.988s 00:07:07.609 user 0m12.265s 00:07:07.609 sys 0m4.552s 00:07:07.609 08:45:30 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@1125 -- # xtrace_disable 00:07:07.609 08:45:30 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:07.609 ************************************ 00:07:07.609 END TEST nvmf_referrals 00:07:07.609 ************************************ 00:07:07.609 08:45:30 nvmf_rdma -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:07:07.609 08:45:30 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:07:07.609 08:45:30 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:07:07.609 08:45:30 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:07.609 ************************************ 00:07:07.609 START TEST nvmf_connect_disconnect 00:07:07.609 ************************************ 00:07:07.609 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:07:07.868 * Looking for test storage... 00:07:07.868 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
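Before nvmftestfini above tore the referrals target down, each RPC-side change was cross-checked from the host with the kernel initiator; the empty-string comparison at referrals.sh@83 passes only because the removals left no referral records behind. That host-side probe (get_referral_ips nvme) boils down to one pipeline — the --hostnqn/--hostid options are elided here, and the listener is the 192.168.100.8:8009 discovery endpoint created at the start of the test:

  # query the discovery log page and keep only referral entries' target addresses
  nvme discover -t rdma -a 192.168.100.8 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
    | sort

The connect_disconnect prologue that resumes below sources the same nvmf/common.sh and rediscovers the e810 ports exactly as referrals.sh did.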
00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # 
build_nvmf_app_args 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:07:07.868 08:45:30 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- 
nvmf/common.sh@297 -- # local -ga x722 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:13.142 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect 
-- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:13.142 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # modinfo irdma 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:13.142 Found net devices under 0000:af:00.0: cvl_0_0 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:13.142 Found net devices under 0000:af:00.1: cvl_0_1 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:13.142 
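For reference, the device-matching loop traced above keys off PCI vendor:device pairs (0x8086:0x159b is the Intel E810 ID reported for both 0000:af:00.0 and 0000:af:00.1). A minimal standalone sketch of the same check, assuming lspci from pciutils is available; the ID list is copied from the e810/x722/mlx arrays in the trace, and the one-liner itself is ours, not SPDK's:

  # Show NICs whose [vendor:device] pair matches a known E810/X722/ConnectX model.
  ids='8086:1592|8086:159b|8086:37d2|15b3:1017|15b3:1019|15b3:101d'
  lspci -Dnn | grep -E "\[($ids)\]"
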
08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # uname 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo cvl_0_0 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:13.142 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo cvl_0_1 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:07:13.143 08:45:34 
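The load_ib_rdma_modules step above is a fixed modprobe sequence; reconstructed here from the trace (the irdma load with roce_ena=1 happens earlier, when the E810 ports are matched):

  # Core IB/iWARP/RDMA-CM stack loaded by the harness, in trace order.
  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$m"
  done
  modprobe irdma roce_ena=1   # force RoCEv2 mode on the E810 ports, as the trace does
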
nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:07:13.143 8: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:07:13.143 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:07:13.143 altname enp175s0f0np0 00:07:13.143 altname ens801f0np0 00:07:13.143 inet 192.168.100.8/24 scope global cvl_0_0 00:07:13.143 valid_lft forever preferred_lft forever 00:07:13.143 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:07:13.143 valid_lft forever preferred_lft forever 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:07:13.143 9: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:07:13.143 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:07:13.143 altname enp175s0f1np1 00:07:13.143 altname ens801f1np1 00:07:13.143 inet 192.168.100.9/24 scope global cvl_0_1 00:07:13.143 valid_lft forever preferred_lft forever 00:07:13.143 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:07:13.143 valid_lft forever preferred_lft forever 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 
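The get_ip_address calls traced above reduce to a three-stage pipeline: column 4 of `ip -o -4 addr show` is ADDR/PREFIX, and cut strips the prefix length. Reassembled as a helper (the pipeline matches the trace; the wrapper function is our sketch):

  get_ip_address() {
      ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address cvl_0_0   # -> 192.168.100.8 on this rig
  get_ip_address cvl_0_1   # -> 192.168.100.9
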
-- # mapfile -t rxe_net_devs 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo cvl_0_0 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo cvl_0_1 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:13.143 192.168.100.9' 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:13.143 192.168.100.9' 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 
00:07:13.143 192.168.100.9' 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@723 -- # xtrace_disable 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1190960 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1190960 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@830 -- # '[' -z 1190960 ']' 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local max_retries=100 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # xtrace_disable 00:07:13.143 08:45:34 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:13.143 [2024-06-09 08:45:34.982142] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:07:13.143 [2024-06-09 08:45:34.982184] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.143 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.143 [2024-06-09 08:45:35.037755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:13.143 [2024-06-09 08:45:35.116306] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:13.143 [2024-06-09 08:45:35.116342] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
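A few entries back, NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP are split out of RDMA_IP_LIST with head/tail. The same extraction in isolation, with the values as printed in the trace (note the literal embedded newline in the list):

  RDMA_IP_LIST='192.168.100.8
192.168.100.9'
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
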
00:07:13.143 [2024-06-09 08:45:35.116349] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:13.143 [2024-06-09 08:45:35.116355] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:13.143 [2024-06-09 08:45:35.116361] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:13.143 [2024-06-09 08:45:35.116403] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.143 [2024-06-09 08:45:35.116501] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.143 [2024-06-09 08:45:35.116525] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.143 [2024-06-09 08:45:35.116526] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.403 08:45:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:07:13.403 08:45:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@863 -- # return 0 00:07:13.403 08:45:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:13.403 08:45:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@729 -- # xtrace_disable 00:07:13.403 08:45:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:13.403 08:45:35 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:13.403 08:45:35 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:07:13.403 08:45:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:13.403 08:45:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:13.403 [2024-06-09 08:45:35.822559] rdma.c:2724:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:07:13.403 [2024-06-09 08:45:35.835935] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x5b98f0/0x5b8f30) succeed. 00:07:13.403 [2024-06-09 08:45:35.844706] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x5baca0/0x5b94b0) succeed. 00:07:13.403 [2024-06-09 08:45:35.844729] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
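rpc_cmd in the trace is a thin wrapper over SPDK's scripts/rpc.py talking to the /var/tmp/spdk.sock socket shown above. Outside the harness, the transport creation above and the subsystem wiring that follows would look roughly like this (paths assume a default SPDK checkout; the trace lets the malloc bdev auto-name itself Malloc0, while -b pins the name explicitly):

  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
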
New I/O unit size 24576 00:07:13.403 08:45:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:13.403 08:45:35 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:13.403 08:45:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:13.403 08:45:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:13.403 08:45:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:13.403 08:45:35 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:13.403 08:45:35 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:13.403 08:45:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:13.403 08:45:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:13.403 08:45:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:13.403 08:45:35 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:13.403 08:45:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:13.403 08:45:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:13.403 08:45:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:13.403 08:45:35 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:13.403 08:45:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:13.403 08:45:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:13.403 [2024-06-09 08:45:35.899712] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:13.403 08:45:35 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:13.403 08:45:35 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:07:13.403 08:45:35 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:07:13.403 08:45:35 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:07:13.403 08:45:35 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:16.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:19.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:21.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:24.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:27.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:30.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:32.638 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:35.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:37.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:41.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:43.577 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) [the NQN:nqn.2016-06.io.spdk:cnode1 'disconnected 1 controller(s)' message repeats once per cycle of the 100-iteration connect/disconnect loop; the repeats stamped 00:07:46.108 through 00:11:40.491 are elided here] 00:11:43.015 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.015 08:50:05 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:43.015 08:50:05 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:43.015 08:50:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:43.015 08:50:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:11:43.015 08:50:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:43.015 08:50:05 nvmf_rdma.nvmf_connect_disconnect
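The condensed run of 'disconnected 1 controller(s)' messages above is produced by the test's main loop, which takes roughly this shape (a reconstruction from the traced num_iterations=100 and NVME_CONNECT='nvme connect -i 8', not connect_disconnect.sh verbatim; -i bounds the I/O queue count, which the harness lowers for this irdma setup):

  for ((i = 0; i < 100; i++)); do
      nvme connect -i 8 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints the NQN:... disconnected line
  done
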
-- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:43.015 08:50:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:11:43.015 08:50:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:43.015 08:50:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:43.272 rmmod nvme_rdma 00:11:43.272 rmmod nvme_fabrics 00:11:43.272 08:50:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:43.272 08:50:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:11:43.272 08:50:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:11:43.272 08:50:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1190960 ']' 00:11:43.272 08:50:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1190960 00:11:43.272 08:50:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@949 -- # '[' -z 1190960 ']' 00:11:43.272 08:50:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # kill -0 1190960 00:11:43.272 08:50:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # uname 00:11:43.272 08:50:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:43.272 08:50:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1190960 00:11:43.272 08:50:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:43.272 08:50:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:43.272 08:50:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1190960' 00:11:43.272 killing process with pid 1190960 00:11:43.272 08:50:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # kill 1190960 00:11:43.272 08:50:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # wait 1190960 00:11:43.530 08:50:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:43.530 08:50:05 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:43.530 00:11:43.530 real 4m35.742s 00:11:43.530 user 18m1.258s 00:11:43.530 sys 0m15.205s 00:11:43.530 08:50:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:43.530 08:50:05 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:43.530 ************************************ 00:11:43.530 END TEST nvmf_connect_disconnect 00:11:43.530 ************************************ 00:11:43.530 08:50:05 nvmf_rdma -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:11:43.530 08:50:05 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:43.530 08:50:05 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:43.530 08:50:05 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:43.530 ************************************ 00:11:43.530 START TEST nvmf_multitarget 00:11:43.530 ************************************ 00:11:43.530 08:50:05 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:11:43.530 * Looking for test 
storage... 00:11:43.530 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:11:43.530 08:50:06 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:48.796 08:50:10 
nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:48.796 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:48.796 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@377 -- # modinfo irdma 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:48.796 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:48.796 
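The pci_net_devs glob at nvmf/common.sh@383, which runs next in the trace, resolves a PCI function to its kernel netdev name through sysfs; the same lookup by hand:

  pci=0000:af:00.0
  ls "/sys/bus/pci/devices/$pci/net/"   # -> cvl_0_0 on this rig
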
08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:48.796 Found net devices under 0000:af:00.0: cvl_0_0 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:48.797 Found net devices under 0000:af:00.1: cvl_0_1 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@420 -- # rdma_device_init 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@58 -- # uname 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:48.797 08:50:10 
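rxe_cfg_small.sh, invoked above as `rxe_cfg rxe-net`, enumerates (and can create) soft-RoCE devices for the rxe fallback path. On kernels shipping iproute2's rdma tool the equivalent by hand would be something like the following; a sketch only, with rxe_cvl0 a made-up link name, and this run uses hardware irdma devices so the harness never takes the rxe path:

  rdma link add rxe_cvl0 type rxe netdev cvl_0_0
  rdma link show
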
nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo cvl_0_0 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo cvl_0_1 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:11:48.797 8: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:11:48.797 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:11:48.797 altname enp175s0f0np0 00:11:48.797 altname ens801f0np0 00:11:48.797 inet 192.168.100.8/24 scope global cvl_0_0 00:11:48.797 valid_lft forever preferred_lft forever 00:11:48.797 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:11:48.797 valid_lft forever preferred_lft forever 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:11:48.797 9: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:11:48.797 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:11:48.797 altname enp175s0f1np1 00:11:48.797 altname ens801f1np1 00:11:48.797 inet 192.168.100.9/24 scope 
global cvl_0_1 00:11:48.797 valid_lft forever preferred_lft forever 00:11:48.797 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:11:48.797 valid_lft forever preferred_lft forever 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:48.797 08:50:10 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:48.797 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:48.797 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:48.797 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:48.797 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:11:48.797 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo cvl_0_0 00:11:48.797 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:11:48.797 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:48.797 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:48.797 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:11:48.797 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:48.797 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:11:48.797 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo cvl_0_1 00:11:48.797 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:11:48.797 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:48.797 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:11:48.797 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:11:48.797 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:11:48.797 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:48.797 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:48.797 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:48.797 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:11:48.797 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:11:48.797 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 
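The address lookup traced above (nvmf/common.sh@112-@113) reduces to a short pipeline. A minimal sketch of the helper, assuming only what the traced lines show — the get_ip_address name and the ip/awk/cut pipeline are verbatim from the trace, everything else in the real common.sh is omitted:

get_ip_address() {
    local interface=$1
    # Print the IPv4 address of the interface without the /24 prefix length,
    # e.g. 192.168.100.8 for cvl_0_0 and 192.168.100.9 for cvl_0_1 above.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

The awk and cut stages of the cvl_0_1 lookup continue in the trace below.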
00:11:48.797 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:48.797 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:48.797 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:48.797 192.168.100.9' 00:11:48.797 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:48.797 192.168.100.9' 00:11:48.797 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # head -n 1 00:11:48.797 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:48.798 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:48.798 192.168.100.9' 00:11:48.798 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # tail -n +2 00:11:48.798 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # head -n 1 00:11:48.798 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:48.798 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:48.798 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:48.798 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:48.798 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:48.798 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:48.798 08:50:11 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:48.798 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:48.798 08:50:11 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:48.798 08:50:11 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:48.798 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:48.798 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1240394 00:11:48.798 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1240394 00:11:48.798 08:50:11 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@830 -- # '[' -z 1240394 ']' 00:11:48.798 08:50:11 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.798 08:50:11 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:48.798 08:50:11 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.798 08:50:11 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:48.798 08:50:11 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:48.798 [2024-06-09 08:50:11.115868] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
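Earlier in this stretch the two target addresses are peeled off RDMA_IP_LIST with head and tail (nvmf/common.sh@456-@458). A condensed sketch, assuming the list holds exactly the two addresses discovered above, one per line; the variable names and commands are verbatim from the trace:

# RDMA_IP_LIST is the output of get_available_rdma_ips: one IPv4 address per RDMA NIC.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9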
00:11:48.798 [2024-06-09 08:50:11.115911] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:48.798 EAL: No free 2048 kB hugepages reported on node 1 00:11:48.798 [2024-06-09 08:50:11.170773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:48.798 [2024-06-09 08:50:11.243616] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:48.798 [2024-06-09 08:50:11.243659] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:48.798 [2024-06-09 08:50:11.243666] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:48.798 [2024-06-09 08:50:11.243672] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:48.798 [2024-06-09 08:50:11.243677] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:48.798 [2024-06-09 08:50:11.243742] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.798 [2024-06-09 08:50:11.243798] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:11:48.798 [2024-06-09 08:50:11.243883] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:11:48.798 [2024-06-09 08:50:11.243885] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.733 08:50:11 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:49.733 08:50:11 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@863 -- # return 0 00:11:49.733 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:49.733 08:50:11 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:49.733 08:50:11 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:49.733 08:50:11 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:49.733 08:50:11 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:49.733 08:50:11 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:49.733 08:50:11 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:49.733 08:50:12 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:49.733 08:50:12 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:49.733 "nvmf_tgt_1" 00:11:49.733 08:50:12 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:49.733 "nvmf_tgt_2" 00:11:49.733 08:50:12 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:49.733 08:50:12 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:50.004 08:50:12 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 
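The multitarget test body just traced is a create-then-verify sequence over multitarget_rpc.py. A condensed sketch, with the script path, RPC names, and the -s 32 subsystem-array size verbatim from the trace; the exit-on-mismatch checks paraphrase the '[' N '!=' N ']' tests at target/multitarget.sh@21 and @28:

rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
# One implicit target exists after nvmfappstart.
[ "$($rpc nvmf_get_targets | jq length)" = 1 ] || exit 1
# Create two named targets, each sized for 32 subsystems.
$rpc nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc nvmf_create_target -n nvmf_tgt_2 -s 32
# All three targets should now be reported.
[ "$($rpc nvmf_get_targets | jq length)" = 3 ] || exit 1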
00:11:50.004 08:50:12 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:50.004 true 00:11:50.004 08:50:12 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:50.320 true 00:11:50.320 08:50:12 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:50.320 08:50:12 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:50.320 08:50:12 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:50.320 08:50:12 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:50.320 08:50:12 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:50.320 08:50:12 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:50.320 08:50:12 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:11:50.320 08:50:12 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:50.320 08:50:12 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:50.320 08:50:12 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:11:50.320 08:50:12 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:50.320 08:50:12 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:50.320 rmmod nvme_rdma 00:11:50.320 rmmod nvme_fabrics 00:11:50.320 08:50:12 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:50.320 08:50:12 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:11:50.320 08:50:12 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:11:50.320 08:50:12 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1240394 ']' 00:11:50.320 08:50:12 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1240394 00:11:50.320 08:50:12 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@949 -- # '[' -z 1240394 ']' 00:11:50.320 08:50:12 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@953 -- # kill -0 1240394 00:11:50.320 08:50:12 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@954 -- # uname 00:11:50.320 08:50:12 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:11:50.320 08:50:12 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1240394 00:11:50.320 08:50:12 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:11:50.320 08:50:12 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:11:50.320 08:50:12 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1240394' 00:11:50.320 killing process with pid 1240394 00:11:50.320 08:50:12 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@968 -- # kill 1240394 00:11:50.320 08:50:12 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@973 -- # wait 1240394 00:11:50.579 08:50:12 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:50.579 08:50:12 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:50.579 00:11:50.579 real 0m7.014s 00:11:50.579 
user 0m9.023s 00:11:50.579 sys 0m4.232s 00:11:50.579 08:50:12 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@1125 -- # xtrace_disable 00:11:50.579 08:50:12 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:50.579 ************************************ 00:11:50.579 END TEST nvmf_multitarget 00:11:50.579 ************************************ 00:11:50.580 08:50:12 nvmf_rdma -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:11:50.580 08:50:12 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:11:50.580 08:50:12 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:11:50.580 08:50:12 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:50.580 ************************************ 00:11:50.580 START TEST nvmf_rpc 00:11:50.580 ************************************ 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:11:50.580 * Looking for test storage... 00:11:50.580 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:50.580 08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:50.839 
08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:50.839 08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:50.839 08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:50.839 08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.839 08:50:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:50.839 08:50:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.839 08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:50.839 08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:50.839 08:50:13 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:11:50.839 08:50:13 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- 
nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:56.110 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:56.110 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@377 -- # modinfo irdma 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:56.110 Found net devices under 0000:af:00.0: cvl_0_0 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:56.110 Found net devices under 0000:af:00.1: cvl_0_1 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@420 -- # rdma_device_init 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@58 -- # uname 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:56.110 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo cvl_0_0 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:56.111 08:50:18 
nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo cvl_0_1 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:11:56.111 8: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:11:56.111 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:11:56.111 altname enp175s0f0np0 00:11:56.111 altname ens801f0np0 00:11:56.111 inet 192.168.100.8/24 scope global cvl_0_0 00:11:56.111 valid_lft forever preferred_lft forever 00:11:56.111 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:11:56.111 valid_lft forever preferred_lft forever 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:11:56.111 9: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:11:56.111 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:11:56.111 altname enp175s0f1np1 00:11:56.111 altname ens801f1np1 00:11:56.111 inet 192.168.100.9/24 scope global cvl_0_1 00:11:56.111 valid_lft forever preferred_lft forever 00:11:56.111 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:11:56.111 valid_lft forever preferred_lft forever 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- 
# get_rdma_if_list 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo cvl_0_0 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo cvl_0_1 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:56.111 192.168.100.9' 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:56.111 192.168.100.9' 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # head -n 1 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:56.111 192.168.100.9' 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # tail -n +2 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # head -n 1 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 
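This second interface walk (nvmf/common.sh@86-@87) shows that get_available_rdma_ips simply maps the address lookup over the RDMA-capable NICs. A minimal sketch consistent with the traced loop, assuming get_rdma_if_list prints one interface name per line as it does above:

get_available_rdma_ips() {
    local nic_name
    # Emits 192.168.100.8 and 192.168.100.9 on this host, one per line.
    for nic_name in $(get_rdma_if_list); do
        get_ip_address "$nic_name"
    done
}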
00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@723 -- # xtrace_disable 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1243690 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1243690 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@830 -- # '[' -z 1243690 ']' 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@835 -- # local max_retries=100 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@839 -- # xtrace_disable 00:11:56.111 08:50:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.111 [2024-06-09 08:50:18.344585] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:11:56.111 [2024-06-09 08:50:18.344633] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:56.111 EAL: No free 2048 kB hugepages reported on node 1 00:11:56.111 [2024-06-09 08:50:18.398950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:56.111 [2024-06-09 08:50:18.473527] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:56.111 [2024-06-09 08:50:18.473570] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:56.111 [2024-06-09 08:50:18.473576] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:56.111 [2024-06-09 08:50:18.473582] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:56.112 [2024-06-09 08:50:18.473587] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
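nvmfappstart, traced above, launches the target and blocks until its RPC socket answers. A sketch of that sequence; the nvmf_tgt path and its -i/-e/-m flags are verbatim from the trace, while the polling loop is an illustrative stand-in for the suite's waitforlisten helper (scripts/rpc.py and rpc_get_methods are standard SPDK, but this exact loop is an assumption, not the upstream code):

NVMF_APP=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt
# -i 0: shared-memory id, -e 0xFFFF: enable all tracepoint groups, -m 0xF: cores 0-3.
$NVMF_APP -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll the UNIX-domain RPC socket until the app is ready (stand-in for waitforlisten).
until /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done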
00:11:56.112 [2024-06-09 08:50:18.473651] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:11:56.112 [2024-06-09 08:50:18.473668] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:11:56.112 [2024-06-09 08:50:18.473771] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:11:56.112 [2024-06-09 08:50:18.473774] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.679 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:11:56.679 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@863 -- # return 0 00:11:56.679 08:50:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:56.679 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@729 -- # xtrace_disable 00:11:56.679 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.679 08:50:19 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:56.679 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:56.679 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:56.679 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.679 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:56.679 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:56.679 "tick_rate": 2100000000, 00:11:56.679 "poll_groups": [ 00:11:56.679 { 00:11:56.679 "name": "nvmf_tgt_poll_group_000", 00:11:56.679 "admin_qpairs": 0, 00:11:56.679 "io_qpairs": 0, 00:11:56.679 "current_admin_qpairs": 0, 00:11:56.679 "current_io_qpairs": 0, 00:11:56.679 "pending_bdev_io": 0, 00:11:56.679 "completed_nvme_io": 0, 00:11:56.679 "transports": [] 00:11:56.679 }, 00:11:56.679 { 00:11:56.679 "name": "nvmf_tgt_poll_group_001", 00:11:56.679 "admin_qpairs": 0, 00:11:56.679 "io_qpairs": 0, 00:11:56.679 "current_admin_qpairs": 0, 00:11:56.679 "current_io_qpairs": 0, 00:11:56.679 "pending_bdev_io": 0, 00:11:56.679 "completed_nvme_io": 0, 00:11:56.679 "transports": [] 00:11:56.679 }, 00:11:56.679 { 00:11:56.679 "name": "nvmf_tgt_poll_group_002", 00:11:56.679 "admin_qpairs": 0, 00:11:56.679 "io_qpairs": 0, 00:11:56.679 "current_admin_qpairs": 0, 00:11:56.679 "current_io_qpairs": 0, 00:11:56.679 "pending_bdev_io": 0, 00:11:56.679 "completed_nvme_io": 0, 00:11:56.679 "transports": [] 00:11:56.679 }, 00:11:56.679 { 00:11:56.679 "name": "nvmf_tgt_poll_group_003", 00:11:56.679 "admin_qpairs": 0, 00:11:56.679 "io_qpairs": 0, 00:11:56.679 "current_admin_qpairs": 0, 00:11:56.679 "current_io_qpairs": 0, 00:11:56.679 "pending_bdev_io": 0, 00:11:56.679 "completed_nvme_io": 0, 00:11:56.679 "transports": [] 00:11:56.679 } 00:11:56.679 ] 00:11:56.679 }' 00:11:56.679 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:56.679 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:56.679 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:56.679 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:56.938 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:56.938 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:56.938 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:56.938 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport 
-t rdma --num-shared-buffers 1024 -u 8192 00:11:56.938 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:56.938 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.938 [2024-06-09 08:50:19.304469] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x90c900/0x90bf40) succeed. 00:11:56.938 [2024-06-09 08:50:19.313343] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x90dc70/0x90c4c0) succeed. 00:11:56.938 [2024-06-09 08:50:19.313365] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:11:56.938 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:56.938 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:56.938 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:56.938 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.938 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:56.938 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:56.938 "tick_rate": 2100000000, 00:11:56.938 "poll_groups": [ 00:11:56.938 { 00:11:56.938 "name": "nvmf_tgt_poll_group_000", 00:11:56.938 "admin_qpairs": 0, 00:11:56.938 "io_qpairs": 0, 00:11:56.938 "current_admin_qpairs": 0, 00:11:56.938 "current_io_qpairs": 0, 00:11:56.938 "pending_bdev_io": 0, 00:11:56.938 "completed_nvme_io": 0, 00:11:56.938 "transports": [ 00:11:56.938 { 00:11:56.938 "trtype": "RDMA", 00:11:56.938 "pending_data_buffer": 0, 00:11:56.938 "devices": [ 00:11:56.938 { 00:11:56.938 "name": "rocep175s0f0", 00:11:56.938 "polls": 1632, 00:11:56.938 "idle_polls": 1632, 00:11:56.938 "completions": 0, 00:11:56.938 "requests": 0, 00:11:56.938 "request_latency": 0, 00:11:56.938 "pending_free_request": 0, 00:11:56.938 "pending_rdma_read": 0, 00:11:56.938 "pending_rdma_write": 0, 00:11:56.938 "pending_rdma_send": 0, 00:11:56.938 "total_send_wrs": 0, 00:11:56.938 "send_doorbell_updates": 0, 00:11:56.938 "total_recv_wrs": 0, 00:11:56.938 "recv_doorbell_updates": 0 00:11:56.938 }, 00:11:56.938 { 00:11:56.938 "name": "rocep175s0f1", 00:11:56.938 "polls": 1632, 00:11:56.938 "idle_polls": 1632, 00:11:56.938 "completions": 0, 00:11:56.938 "requests": 0, 00:11:56.938 "request_latency": 0, 00:11:56.938 "pending_free_request": 0, 00:11:56.938 "pending_rdma_read": 0, 00:11:56.938 "pending_rdma_write": 0, 00:11:56.938 "pending_rdma_send": 0, 00:11:56.938 "total_send_wrs": 0, 00:11:56.938 "send_doorbell_updates": 0, 00:11:56.938 "total_recv_wrs": 0, 00:11:56.938 "recv_doorbell_updates": 0 00:11:56.938 } 00:11:56.938 ] 00:11:56.938 } 00:11:56.938 ] 00:11:56.938 }, 00:11:56.938 { 00:11:56.938 "name": "nvmf_tgt_poll_group_001", 00:11:56.938 "admin_qpairs": 0, 00:11:56.938 "io_qpairs": 0, 00:11:56.938 "current_admin_qpairs": 0, 00:11:56.938 "current_io_qpairs": 0, 00:11:56.938 "pending_bdev_io": 0, 00:11:56.938 "completed_nvme_io": 0, 00:11:56.938 "transports": [ 00:11:56.938 { 00:11:56.938 "trtype": "RDMA", 00:11:56.938 "pending_data_buffer": 0, 00:11:56.938 "devices": [ 00:11:56.938 { 00:11:56.938 "name": "rocep175s0f0", 00:11:56.938 "polls": 1569, 00:11:56.938 "idle_polls": 1569, 00:11:56.938 "completions": 0, 00:11:56.938 "requests": 0, 00:11:56.938 "request_latency": 0, 00:11:56.938 "pending_free_request": 0, 00:11:56.938 "pending_rdma_read": 0, 00:11:56.938 "pending_rdma_write": 0, 00:11:56.938 
"pending_rdma_send": 0, 00:11:56.938 "total_send_wrs": 0, 00:11:56.938 "send_doorbell_updates": 0, 00:11:56.938 "total_recv_wrs": 0, 00:11:56.938 "recv_doorbell_updates": 0 00:11:56.938 }, 00:11:56.938 { 00:11:56.938 "name": "rocep175s0f1", 00:11:56.938 "polls": 1569, 00:11:56.938 "idle_polls": 1569, 00:11:56.938 "completions": 0, 00:11:56.938 "requests": 0, 00:11:56.938 "request_latency": 0, 00:11:56.938 "pending_free_request": 0, 00:11:56.938 "pending_rdma_read": 0, 00:11:56.938 "pending_rdma_write": 0, 00:11:56.938 "pending_rdma_send": 0, 00:11:56.938 "total_send_wrs": 0, 00:11:56.938 "send_doorbell_updates": 0, 00:11:56.938 "total_recv_wrs": 0, 00:11:56.938 "recv_doorbell_updates": 0 00:11:56.938 } 00:11:56.938 ] 00:11:56.938 } 00:11:56.938 ] 00:11:56.938 }, 00:11:56.938 { 00:11:56.938 "name": "nvmf_tgt_poll_group_002", 00:11:56.938 "admin_qpairs": 0, 00:11:56.938 "io_qpairs": 0, 00:11:56.938 "current_admin_qpairs": 0, 00:11:56.938 "current_io_qpairs": 0, 00:11:56.938 "pending_bdev_io": 0, 00:11:56.938 "completed_nvme_io": 0, 00:11:56.938 "transports": [ 00:11:56.938 { 00:11:56.938 "trtype": "RDMA", 00:11:56.938 "pending_data_buffer": 0, 00:11:56.938 "devices": [ 00:11:56.938 { 00:11:56.938 "name": "rocep175s0f0", 00:11:56.938 "polls": 1446, 00:11:56.938 "idle_polls": 1446, 00:11:56.938 "completions": 0, 00:11:56.938 "requests": 0, 00:11:56.938 "request_latency": 0, 00:11:56.938 "pending_free_request": 0, 00:11:56.938 "pending_rdma_read": 0, 00:11:56.938 "pending_rdma_write": 0, 00:11:56.938 "pending_rdma_send": 0, 00:11:56.938 "total_send_wrs": 0, 00:11:56.939 "send_doorbell_updates": 0, 00:11:56.939 "total_recv_wrs": 0, 00:11:56.939 "recv_doorbell_updates": 0 00:11:56.939 }, 00:11:56.939 { 00:11:56.939 "name": "rocep175s0f1", 00:11:56.939 "polls": 1446, 00:11:56.939 "idle_polls": 1446, 00:11:56.939 "completions": 0, 00:11:56.939 "requests": 0, 00:11:56.939 "request_latency": 0, 00:11:56.939 "pending_free_request": 0, 00:11:56.939 "pending_rdma_read": 0, 00:11:56.939 "pending_rdma_write": 0, 00:11:56.939 "pending_rdma_send": 0, 00:11:56.939 "total_send_wrs": 0, 00:11:56.939 "send_doorbell_updates": 0, 00:11:56.939 "total_recv_wrs": 0, 00:11:56.939 "recv_doorbell_updates": 0 00:11:56.939 } 00:11:56.939 ] 00:11:56.939 } 00:11:56.939 ] 00:11:56.939 }, 00:11:56.939 { 00:11:56.939 "name": "nvmf_tgt_poll_group_003", 00:11:56.939 "admin_qpairs": 0, 00:11:56.939 "io_qpairs": 0, 00:11:56.939 "current_admin_qpairs": 0, 00:11:56.939 "current_io_qpairs": 0, 00:11:56.939 "pending_bdev_io": 0, 00:11:56.939 "completed_nvme_io": 0, 00:11:56.939 "transports": [ 00:11:56.939 { 00:11:56.939 "trtype": "RDMA", 00:11:56.939 "pending_data_buffer": 0, 00:11:56.939 "devices": [ 00:11:56.939 { 00:11:56.939 "name": "rocep175s0f0", 00:11:56.939 "polls": 1057, 00:11:56.939 "idle_polls": 1057, 00:11:56.939 "completions": 0, 00:11:56.939 "requests": 0, 00:11:56.939 "request_latency": 0, 00:11:56.939 "pending_free_request": 0, 00:11:56.939 "pending_rdma_read": 0, 00:11:56.939 "pending_rdma_write": 0, 00:11:56.939 "pending_rdma_send": 0, 00:11:56.939 "total_send_wrs": 0, 00:11:56.939 "send_doorbell_updates": 0, 00:11:56.939 "total_recv_wrs": 0, 00:11:56.939 "recv_doorbell_updates": 0 00:11:56.939 }, 00:11:56.939 { 00:11:56.939 "name": "rocep175s0f1", 00:11:56.939 "polls": 1057, 00:11:56.939 "idle_polls": 1057, 00:11:56.939 "completions": 0, 00:11:56.939 "requests": 0, 00:11:56.939 "request_latency": 0, 00:11:56.939 "pending_free_request": 0, 00:11:56.939 "pending_rdma_read": 0, 00:11:56.939 "pending_rdma_write": 0, 
00:11:56.939 "pending_rdma_send": 0, 00:11:56.939 "total_send_wrs": 0, 00:11:56.939 "send_doorbell_updates": 0, 00:11:56.939 "total_recv_wrs": 0, 00:11:56.939 "recv_doorbell_updates": 0 00:11:56.939 } 00:11:56.939 ] 00:11:56.939 } 00:11:56.939 ] 00:11:56.939 } 00:11:56.939 ] 00:11:56.939 }' 00:11:56.939 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:56.939 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:56.939 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:56.939 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:56.939 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:56.939 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:56.939 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:56.939 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:56.939 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:56.939 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:56.939 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:11:56.939 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:11:56.939 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:11:56.939 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:11:56.939 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:56.939 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:11:56.939 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.198 Malloc1 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.198 08:50:19 
nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.198 [2024-06-09 08:50:19.629402] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:11:57.198 [2024-06-09 08:50:19.667521] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' 
does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562' 00:11:57.198 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:57.198 could not add new controller: failed to write to nvme-fabrics device 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:57.198 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:57.456 08:50:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:57.456 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:11:57.456 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:11:57.456 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:11:57.456 08:50:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:11:59.986 08:50:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:11:59.986 08:50:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:59.986 08:50:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:11:59.986 08:50:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:11:59.986 08:50:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:11:59.986 08:50:21 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:11:59.986 08:50:21 nvmf_rdma.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:00.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.552 08:50:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:00.552 08:50:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:12:00.552 08:50:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:12:00.552 08:50:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:00.552 08:50:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:12:00.552 08:50:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:00.552 08:50:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:12:00.552 08:50:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 
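[annotation] The trace above exercises SPDK's per-subsystem host access control: with allow_any_host disabled (target/rpc.sh@54), a connect from an unlisted host NQN fails with "does not allow host", and the same connect succeeds once the NQN is whitelisted via nvmf_subsystem_add_host (rpc.sh@61-62). A minimal sketch of that sequence, assuming a running SPDK target and that rpc_cmd wraps scripts/rpc.py; the NQNs and address are the ones from this run and would differ elsewhere:

    rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1     # disallow unlisted hosts
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # Expected to fail: this host NQN is not on the subsystem's allow list
    nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 || true
    # Whitelist the host; the same connect then succeeds
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
    # Removing the host (rpc.sh@68) or re-enabling allow_any_host -e flips access back
    rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562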
00:12:00.552 08:50:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:00.552 08:50:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.552 08:50:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:00.552 08:50:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:00.552 08:50:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:12:00.552 08:50:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:00.552 08:50:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:12:00.552 08:50:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:00.552 08:50:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:12:00.552 08:50:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:00.552 08:50:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:12:00.552 08:50:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:12:00.552 08:50:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:12:00.552 08:50:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:12:00.552 08:50:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:00.552 [2024-06-09 08:50:22.895676] ctrlr.c: 818:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562' 00:12:00.552 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:00.552 could not add new controller: failed to write to nvme-fabrics device 00:12:00.552 08:50:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:12:00.552 08:50:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:12:00.552 08:50:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:12:00.552 08:50:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:12:00.552 08:50:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:00.552 08:50:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:00.552 08:50:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:00.552 08:50:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:00.552 08:50:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:00.809 08:50:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:00.809 
08:50:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:12:00.809 08:50:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:00.810 08:50:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:12:00.810 08:50:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:12:02.709 08:50:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:02.709 08:50:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:02.709 08:50:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:02.709 08:50:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:12:02.709 08:50:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:02.709 08:50:25 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:12:02.709 08:50:25 nvmf_rdma.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:03.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.644 08:50:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:03.644 08:50:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:12:03.644 08:50:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:12:03.644 08:50:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:03.644 08:50:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:12:03.644 08:50:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:03.644 08:50:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:12:03.644 08:50:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:03.644 08:50:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:03.644 08:50:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.644 08:50:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:03.644 08:50:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:03.644 08:50:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:03.644 08:50:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:03.644 08:50:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:03.644 08:50:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.644 08:50:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:03.644 08:50:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:03.644 08:50:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:03.644 08:50:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.644 [2024-06-09 08:50:26.104552] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:03.644 08:50:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:03.644 08:50:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:03.644 08:50:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:03.644 08:50:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.644 08:50:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:03.644 08:50:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:03.644 08:50:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:03.644 08:50:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:03.644 08:50:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:03.644 08:50:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:03.903 08:50:26 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:03.903 08:50:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:12:03.903 08:50:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:03.903 08:50:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:12:03.903 08:50:26 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:12:05.805 08:50:28 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:05.805 08:50:28 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:05.805 08:50:28 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:05.805 08:50:28 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:12:05.805 08:50:28 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:05.805 08:50:28 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:12:05.805 08:50:28 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:06.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.741 08:50:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:06.741 08:50:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:12:06.741 08:50:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:12:06.741 08:50:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.741 08:50:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:12:06.741 08:50:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.741 08:50:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:12:06.741 08:50:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:06.741 08:50:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:06.741 08:50:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.741 08:50:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:06.741 08:50:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:06.741 08:50:29 
nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:06.741 08:50:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.741 08:50:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:06.741 08:50:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:06.741 08:50:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:06.741 08:50:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:06.741 08:50:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.741 08:50:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:06.741 08:50:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:06.741 08:50:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:06.741 08:50:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.741 [2024-06-09 08:50:29.260547] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:06.741 08:50:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:06.741 08:50:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:06.741 08:50:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:06.741 08:50:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.741 08:50:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:06.741 08:50:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:06.741 08:50:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:06.741 08:50:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.741 08:50:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:06.741 08:50:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:07.000 08:50:29 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:07.000 08:50:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:12:07.000 08:50:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:07.000 08:50:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:12:07.000 08:50:29 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:12:09.533 08:50:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:09.533 08:50:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:09.533 08:50:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:09.533 08:50:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:12:09.533 08:50:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:09.533 08:50:31 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:12:09.533 08:50:31 
nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:09.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.050 08:50:32 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:10.050 08:50:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:12:10.050 08:50:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:12:10.050 08:50:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.050 08:50:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:12:10.050 08:50:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.050 08:50:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:12:10.050 08:50:32 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:10.050 08:50:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:10.050 08:50:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.050 08:50:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:10.050 08:50:32 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:10.050 08:50:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:10.050 08:50:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.050 08:50:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:10.050 08:50:32 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:10.050 08:50:32 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:10.050 08:50:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:10.050 08:50:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.050 08:50:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:10.050 08:50:32 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:10.050 08:50:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:10.050 08:50:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.050 [2024-06-09 08:50:32.405894] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:10.050 08:50:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:10.050 08:50:32 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:10.050 08:50:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:10.050 08:50:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.050 08:50:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:10.050 08:50:32 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:10.050 08:50:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:10.050 08:50:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:10.050 08:50:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 
-- # [[ 0 == 0 ]] 00:12:10.050 08:50:32 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:10.308 08:50:32 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:10.308 08:50:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:12:10.308 08:50:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:10.308 08:50:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:12:10.308 08:50:32 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:12:12.209 08:50:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:12.209 08:50:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:12.209 08:50:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:12.209 08:50:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:12:12.209 08:50:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:12.209 08:50:34 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:12:12.209 08:50:34 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:13.145 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.145 08:50:35 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:13.145 08:50:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:12:13.145 08:50:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:12:13.145 08:50:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.145 08:50:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.145 08:50:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:12:13.145 08:50:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:12:13.145 08:50:35 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:13.146 08:50:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:13.146 08:50:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.146 08:50:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:13.146 08:50:35 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:13.146 08:50:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:13.146 08:50:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.146 08:50:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:13.146 08:50:35 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:13.146 08:50:35 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:13.146 08:50:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:13.146 08:50:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.146 08:50:35 
nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:13.146 08:50:35 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:13.146 08:50:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:13.146 08:50:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.146 [2024-06-09 08:50:35.558327] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:13.146 08:50:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:13.146 08:50:35 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:13.146 08:50:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:13.146 08:50:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.146 08:50:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:13.146 08:50:35 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:13.146 08:50:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:13.146 08:50:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:13.146 08:50:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:13.146 08:50:35 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:13.405 08:50:35 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:13.405 08:50:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:12:13.405 08:50:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:13.405 08:50:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:12:13.405 08:50:35 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:12:15.309 08:50:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:15.309 08:50:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:15.309 08:50:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:15.309 08:50:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:12:15.309 08:50:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:15.309 08:50:37 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:12:15.309 08:50:37 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:16.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.246 08:50:38 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:16.246 08:50:38 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:12:16.246 08:50:38 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:12:16.246 08:50:38 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:16.246 08:50:38 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:12:16.246 
08:50:38 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:16.246 08:50:38 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:12:16.246 08:50:38 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:16.246 08:50:38 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:16.246 08:50:38 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.246 08:50:38 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:16.246 08:50:38 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:16.246 08:50:38 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:16.246 08:50:38 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.246 08:50:38 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:16.246 08:50:38 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:16.246 08:50:38 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:16.246 08:50:38 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:16.246 08:50:38 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.246 08:50:38 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:16.246 08:50:38 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:16.246 08:50:38 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:16.246 08:50:38 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.246 [2024-06-09 08:50:38.705202] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:16.246 08:50:38 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:16.246 08:50:38 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:16.246 08:50:38 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:16.246 08:50:38 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.246 08:50:38 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:16.246 08:50:38 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:16.246 08:50:38 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:16.246 08:50:38 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.246 08:50:38 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:16.246 08:50:38 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:16.507 08:50:38 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:16.507 08:50:38 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1197 -- # local i=0 00:12:16.507 08:50:38 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:12:16.507 08:50:38 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 
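[annotation] Each connect in these create/connect/disconnect loops is gated by the waitforserial helper traced at common/autotest_common.sh@1197-1207: it polls lsblk until a block device whose SERIAL column matches the subsystem serial (SPDKISFASTANDAWESOME) appears, retrying up to 15 times with a 2-second sleep. A rough reconstruction from the trace, not the verbatim helper:

    # Sketch assuming the serial is passed as $1 and one device is expected
    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=${2:-1} nvme_devices=0
        while (( i++ <= 15 )); do
            sleep 2
            # Count block devices whose SERIAL column matches
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }
    waitforserial SPDKISFASTANDAWESOME    # as invoked at target/rpc.sh@88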
00:12:16.507 08:50:38 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1204 -- # sleep 2 00:12:18.435 08:50:40 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:12:18.435 08:50:40 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:18.435 08:50:40 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:12:18.435 08:50:40 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:12:18.435 08:50:40 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:12:18.435 08:50:40 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1207 -- # return 0 00:12:18.435 08:50:40 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:19.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1218 -- # local i=0 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1230 -- # return 0 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.373 [2024-06-09 08:50:41.875326] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 
== 0 ]] 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.373 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.373 [2024-06-09 08:50:41.927512] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:19.633 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.633 08:50:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:19.633 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.633 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.633 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.633 08:50:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:19.633 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.633 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.633 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.633 08:50:41 nvmf_rdma.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:19.633 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.633 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.633 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.633 08:50:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.633 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.633 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.633 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.633 08:50:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:19.633 08:50:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:19.633 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.633 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.633 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.633 08:50:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:19.633 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.633 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.633 [2024-06-09 08:50:41.979798] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:19.633 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.633 08:50:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:19.633 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.633 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.633 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.633 08:50:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:19.633 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.633 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.633 08:50:41 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.633 08:50:41 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:19.633 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.633 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.633 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.633 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.633 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.633 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.633 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.633 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:19.633 08:50:42 
nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:19.633 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.633 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.633 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.633 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:19.633 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.633 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.633 [2024-06-09 08:50:42.028044] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:19.633 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.633 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:19.633 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.633 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.633 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.633 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:19.633 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.633 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.633 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.633 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:19.633 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.633 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.633 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.633 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.633 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.633 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.633 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.633 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:19.633 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:19.633 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.634 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.634 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.634 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:19.634 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.634 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.634 [2024-06-09 08:50:42.076216] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 
192.168.100.8 port 4420 *** 00:12:19.634 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.634 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:19.634 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.634 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.634 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.634 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:19.634 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.634 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.634 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.634 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:19.634 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.634 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.634 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.634 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.634 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.634 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.634 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.634 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:19.634 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:19.634 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.634 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:19.634 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:19.634 "tick_rate": 2100000000, 00:12:19.634 "poll_groups": [ 00:12:19.634 { 00:12:19.634 "name": "nvmf_tgt_poll_group_000", 00:12:19.634 "admin_qpairs": 2, 00:12:19.634 "io_qpairs": 27, 00:12:19.634 "current_admin_qpairs": 0, 00:12:19.634 "current_io_qpairs": 0, 00:12:19.634 "pending_bdev_io": 0, 00:12:19.634 "completed_nvme_io": 126, 00:12:19.634 "transports": [ 00:12:19.634 { 00:12:19.634 "trtype": "RDMA", 00:12:19.634 "pending_data_buffer": 0, 00:12:19.634 "devices": [ 00:12:19.634 { 00:12:19.634 "name": "rocep175s0f0", 00:12:19.634 "polls": 2758506, 00:12:19.634 "idle_polls": 2758074, 00:12:19.634 "completions": 3905, 00:12:19.634 "requests": 3727, 00:12:19.634 "request_latency": 422177792, 00:12:19.634 "pending_free_request": 0, 00:12:19.634 "pending_rdma_read": 0, 00:12:19.634 "pending_rdma_write": 0, 00:12:19.634 "pending_rdma_send": 0, 00:12:19.634 "total_send_wrs": 301, 00:12:19.634 "send_doorbell_updates": 153, 00:12:19.634 "total_recv_wrs": 3727, 00:12:19.634 "recv_doorbell_updates": 180 00:12:19.634 }, 00:12:19.634 { 00:12:19.634 "name": "rocep175s0f1", 00:12:19.634 "polls": 2758506, 00:12:19.634 "idle_polls": 2758506, 00:12:19.634 "completions": 0, 00:12:19.634 "requests": 0, 00:12:19.634 "request_latency": 0, 00:12:19.634 "pending_free_request": 0, 00:12:19.634 "pending_rdma_read": 0, 00:12:19.634 "pending_rdma_write": 0, 00:12:19.634 
"pending_rdma_send": 0, 00:12:19.634 "total_send_wrs": 0, 00:12:19.634 "send_doorbell_updates": 0, 00:12:19.634 "total_recv_wrs": 0, 00:12:19.634 "recv_doorbell_updates": 0 00:12:19.634 } 00:12:19.634 ] 00:12:19.634 } 00:12:19.634 ] 00:12:19.634 }, 00:12:19.634 { 00:12:19.634 "name": "nvmf_tgt_poll_group_001", 00:12:19.634 "admin_qpairs": 2, 00:12:19.634 "io_qpairs": 26, 00:12:19.634 "current_admin_qpairs": 0, 00:12:19.634 "current_io_qpairs": 0, 00:12:19.634 "pending_bdev_io": 0, 00:12:19.634 "completed_nvme_io": 80, 00:12:19.634 "transports": [ 00:12:19.634 { 00:12:19.634 "trtype": "RDMA", 00:12:19.634 "pending_data_buffer": 0, 00:12:19.634 "devices": [ 00:12:19.634 { 00:12:19.634 "name": "rocep175s0f0", 00:12:19.634 "polls": 2777081, 00:12:19.634 "idle_polls": 2776729, 00:12:19.634 "completions": 3652, 00:12:19.634 "requests": 3521, 00:12:19.634 "request_latency": 390821848, 00:12:19.634 "pending_free_request": 0, 00:12:19.634 "pending_rdma_read": 0, 00:12:19.634 "pending_rdma_write": 0, 00:12:19.634 "pending_rdma_send": 0, 00:12:19.634 "total_send_wrs": 210, 00:12:19.634 "send_doorbell_updates": 119, 00:12:19.634 "total_recv_wrs": 3521, 00:12:19.634 "recv_doorbell_updates": 145 00:12:19.634 }, 00:12:19.634 { 00:12:19.634 "name": "rocep175s0f1", 00:12:19.634 "polls": 2777081, 00:12:19.634 "idle_polls": 2777081, 00:12:19.634 "completions": 0, 00:12:19.634 "requests": 0, 00:12:19.634 "request_latency": 0, 00:12:19.634 "pending_free_request": 0, 00:12:19.634 "pending_rdma_read": 0, 00:12:19.634 "pending_rdma_write": 0, 00:12:19.634 "pending_rdma_send": 0, 00:12:19.634 "total_send_wrs": 0, 00:12:19.634 "send_doorbell_updates": 0, 00:12:19.634 "total_recv_wrs": 0, 00:12:19.634 "recv_doorbell_updates": 0 00:12:19.634 } 00:12:19.634 ] 00:12:19.634 } 00:12:19.634 ] 00:12:19.634 }, 00:12:19.634 { 00:12:19.634 "name": "nvmf_tgt_poll_group_002", 00:12:19.634 "admin_qpairs": 1, 00:12:19.634 "io_qpairs": 26, 00:12:19.634 "current_admin_qpairs": 0, 00:12:19.634 "current_io_qpairs": 0, 00:12:19.634 "pending_bdev_io": 0, 00:12:19.634 "completed_nvme_io": 124, 00:12:19.634 "transports": [ 00:12:19.634 { 00:12:19.634 "trtype": "RDMA", 00:12:19.634 "pending_data_buffer": 0, 00:12:19.634 "devices": [ 00:12:19.634 { 00:12:19.634 "name": "rocep175s0f0", 00:12:19.634 "polls": 2798851, 00:12:19.634 "idle_polls": 2798479, 00:12:19.634 "completions": 3692, 00:12:19.634 "requests": 3541, 00:12:19.634 "request_latency": 404811408, 00:12:19.634 "pending_free_request": 0, 00:12:19.634 "pending_rdma_read": 0, 00:12:19.634 "pending_rdma_write": 0, 00:12:19.634 "pending_rdma_send": 0, 00:12:19.634 "total_send_wrs": 261, 00:12:19.634 "send_doorbell_updates": 127, 00:12:19.634 "total_recv_wrs": 3541, 00:12:19.634 "recv_doorbell_updates": 153 00:12:19.634 }, 00:12:19.634 { 00:12:19.634 "name": "rocep175s0f1", 00:12:19.634 "polls": 2798851, 00:12:19.634 "idle_polls": 2798851, 00:12:19.634 "completions": 0, 00:12:19.634 "requests": 0, 00:12:19.634 "request_latency": 0, 00:12:19.634 "pending_free_request": 0, 00:12:19.634 "pending_rdma_read": 0, 00:12:19.634 "pending_rdma_write": 0, 00:12:19.634 "pending_rdma_send": 0, 00:12:19.634 "total_send_wrs": 0, 00:12:19.634 "send_doorbell_updates": 0, 00:12:19.634 "total_recv_wrs": 0, 00:12:19.634 "recv_doorbell_updates": 0 00:12:19.634 } 00:12:19.634 ] 00:12:19.634 } 00:12:19.634 ] 00:12:19.634 }, 00:12:19.634 { 00:12:19.634 "name": "nvmf_tgt_poll_group_003", 00:12:19.634 "admin_qpairs": 2, 00:12:19.634 "io_qpairs": 26, 00:12:19.634 "current_admin_qpairs": 0, 00:12:19.634 
"current_io_qpairs": 0, 00:12:19.634 "pending_bdev_io": 0, 00:12:19.634 "completed_nvme_io": 125, 00:12:19.634 "transports": [ 00:12:19.634 { 00:12:19.634 "trtype": "RDMA", 00:12:19.634 "pending_data_buffer": 0, 00:12:19.634 "devices": [ 00:12:19.635 { 00:12:19.635 "name": "rocep175s0f0", 00:12:19.635 "polls": 2176823, 00:12:19.635 "idle_polls": 2176400, 00:12:19.635 "completions": 3740, 00:12:19.635 "requests": 3565, 00:12:19.635 "request_latency": 408674090, 00:12:19.635 "pending_free_request": 0, 00:12:19.635 "pending_rdma_read": 0, 00:12:19.635 "pending_rdma_write": 0, 00:12:19.635 "pending_rdma_send": 0, 00:12:19.635 "total_send_wrs": 298, 00:12:19.635 "send_doorbell_updates": 149, 00:12:19.635 "total_recv_wrs": 3565, 00:12:19.635 "recv_doorbell_updates": 175 00:12:19.635 }, 00:12:19.635 { 00:12:19.635 "name": "rocep175s0f1", 00:12:19.635 "polls": 2176823, 00:12:19.635 "idle_polls": 2176823, 00:12:19.635 "completions": 0, 00:12:19.635 "requests": 0, 00:12:19.635 "request_latency": 0, 00:12:19.635 "pending_free_request": 0, 00:12:19.635 "pending_rdma_read": 0, 00:12:19.635 "pending_rdma_write": 0, 00:12:19.635 "pending_rdma_send": 0, 00:12:19.635 "total_send_wrs": 0, 00:12:19.635 "send_doorbell_updates": 0, 00:12:19.635 "total_recv_wrs": 0, 00:12:19.635 "recv_doorbell_updates": 0 00:12:19.635 } 00:12:19.635 ] 00:12:19.635 } 00:12:19.635 ] 00:12:19.635 } 00:12:19.635 ] 00:12:19.635 }' 00:12:19.635 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:19.635 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:19.635 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:19.635 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@117 -- # (( 14989 > 0 )) 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@118 -- # (( 
1626485138 > 0 )) 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:19.894 rmmod nvme_rdma 00:12:19.894 rmmod nvme_fabrics 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1243690 ']' 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1243690 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@949 -- # '[' -z 1243690 ']' 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@953 -- # kill -0 1243690 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@954 -- # uname 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1243690 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1243690' 00:12:19.894 killing process with pid 1243690 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@968 -- # kill 1243690 00:12:19.894 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@973 -- # wait 1243690 00:12:20.154 08:50:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:20.154 08:50:42 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:20.154 00:12:20.154 real 0m29.621s 00:12:20.154 user 1m38.429s 00:12:20.154 sys 0m5.441s 00:12:20.154 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:20.154 08:50:42 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.154 ************************************ 00:12:20.154 END TEST nvmf_rpc 00:12:20.154 ************************************ 00:12:20.154 08:50:42 nvmf_rdma -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:12:20.154 08:50:42 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:20.154 08:50:42 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:20.154 08:50:42 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:20.413 ************************************ 00:12:20.413 START TEST nvmf_invalid 00:12:20.413 ************************************ 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@1124 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:12:20.413 * Looking for test storage... 00:12:20.413 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:20.413 08:50:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:20.414 08:50:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:20.414 08:50:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:12:20.414 08:50:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:20.414 08:50:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:20.414 08:50:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:20.414 08:50:42 nvmf_rdma.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:20.414 08:50:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:20.414 08:50:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:20.414 08:50:42 
nvmf_rdma.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:20.414 08:50:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:20.414 08:50:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:20.414 08:50:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.414 08:50:42 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:20.414 08:50:42 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.414 08:50:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:20.414 08:50:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:20.414 08:50:42 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:12:20.414 08:50:42 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:25.689 
08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:25.689 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:25.689 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@377 -- # modinfo irdma 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:25.689 Found net devices under 0000:af:00.0: cvl_0_0 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:25.689 Found net devices under 0000:af:00.1: cvl_0_1 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@420 -- # rdma_device_init 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@58 -- # uname 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:25.689 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo cvl_0_0 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo cvl_0_1 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:12:25.690 8: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:12:25.690 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:12:25.690 altname enp175s0f0np0 00:12:25.690 altname ens801f0np0 00:12:25.690 inet 192.168.100.8/24 scope global cvl_0_0 00:12:25.690 valid_lft forever preferred_lft forever 00:12:25.690 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:12:25.690 valid_lft forever preferred_lft forever 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:12:25.690 9: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:12:25.690 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:12:25.690 altname enp175s0f1np1 00:12:25.690 altname ens801f1np1 00:12:25.690 inet 192.168.100.9/24 scope global cvl_0_1 00:12:25.690 valid_lft forever preferred_lft forever 00:12:25.690 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:12:25.690 valid_lft forever preferred_lft 
forever 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo cvl_0_0 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo cvl_0_1 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:25.690 192.168.100.9' 00:12:25.690 
08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:25.690 192.168.100.9' 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # head -n 1 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:25.690 192.168.100.9' 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # tail -n +2 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # head -n 1 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1250759 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1250759 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@830 -- # '[' -z 1250759 ']' 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:25.690 08:50:47 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:25.690 [2024-06-09 08:50:47.413459] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:12:25.690 [2024-06-09 08:50:47.413501] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.691 EAL: No free 2048 kB hugepages reported on node 1 00:12:25.691 [2024-06-09 08:50:47.467391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:25.691 [2024-06-09 08:50:47.539065] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.691 [2024-06-09 08:50:47.539106] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:25.691 [2024-06-09 08:50:47.539113] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:25.691 [2024-06-09 08:50:47.539119] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:25.691 [2024-06-09 08:50:47.539123] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:25.691 [2024-06-09 08:50:47.539188] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.691 [2024-06-09 08:50:47.539284] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.691 [2024-06-09 08:50:47.539354] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:12:25.691 [2024-06-09 08:50:47.539355] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.691 08:50:48 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:25.691 08:50:48 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@863 -- # return 0 00:12:25.691 08:50:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:25.691 08:50:48 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:25.691 08:50:48 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:25.691 08:50:48 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:25.691 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:25.949 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode24938 00:12:25.949 [2024-06-09 08:50:48.398104] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:25.949 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:25.949 { 00:12:25.949 "nqn": "nqn.2016-06.io.spdk:cnode24938", 00:12:25.949 "tgt_name": "foobar", 00:12:25.949 "method": "nvmf_create_subsystem", 00:12:25.949 "req_id": 1 00:12:25.949 } 00:12:25.949 Got JSON-RPC error response 00:12:25.949 response: 00:12:25.949 { 00:12:25.949 "code": -32603, 00:12:25.949 "message": "Unable to find target foobar" 00:12:25.949 }' 00:12:25.950 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:25.950 { 00:12:25.950 "nqn": "nqn.2016-06.io.spdk:cnode24938", 00:12:25.950 "tgt_name": "foobar", 00:12:25.950 "method": "nvmf_create_subsystem", 00:12:25.950 "req_id": 1 00:12:25.950 } 00:12:25.950 Got JSON-RPC error response 00:12:25.950 response: 00:12:25.950 { 00:12:25.950 "code": -32603, 00:12:25.950 "message": "Unable to find target foobar" 00:12:25.950 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:25.950 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:25.950 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode3085 00:12:26.208 [2024-06-09 08:50:48.574745] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3085: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:26.208 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:26.208 { 00:12:26.209 "nqn": "nqn.2016-06.io.spdk:cnode3085", 
00:12:26.209 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:26.209 "method": "nvmf_create_subsystem", 00:12:26.209 "req_id": 1 00:12:26.209 } 00:12:26.209 Got JSON-RPC error response 00:12:26.209 response: 00:12:26.209 { 00:12:26.209 "code": -32602, 00:12:26.209 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:26.209 }' 00:12:26.209 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:26.209 { 00:12:26.209 "nqn": "nqn.2016-06.io.spdk:cnode3085", 00:12:26.209 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:26.209 "method": "nvmf_create_subsystem", 00:12:26.209 "req_id": 1 00:12:26.209 } 00:12:26.209 Got JSON-RPC error response 00:12:26.209 response: 00:12:26.209 { 00:12:26.209 "code": -32602, 00:12:26.209 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:26.209 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:26.209 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:26.209 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode11346 00:12:26.209 [2024-06-09 08:50:48.763381] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11346: invalid model number 'SPDK_Controller' 00:12:26.468 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:26.468 { 00:12:26.468 "nqn": "nqn.2016-06.io.spdk:cnode11346", 00:12:26.468 "model_number": "SPDK_Controller\u001f", 00:12:26.468 "method": "nvmf_create_subsystem", 00:12:26.468 "req_id": 1 00:12:26.468 } 00:12:26.468 Got JSON-RPC error response 00:12:26.468 response: 00:12:26.468 { 00:12:26.468 "code": -32602, 00:12:26.468 "message": "Invalid MN SPDK_Controller\u001f" 00:12:26.468 }' 00:12:26.468 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:26.468 { 00:12:26.468 "nqn": "nqn.2016-06.io.spdk:cnode11346", 00:12:26.468 "model_number": "SPDK_Controller\u001f", 00:12:26.468 "method": "nvmf_create_subsystem", 00:12:26.468 "req_id": 1 00:12:26.468 } 00:12:26.469 Got JSON-RPC error response 00:12:26.469 response: 00:12:26.469 { 00:12:26.469 "code": -32602, 00:12:26.469 "message": "Invalid MN SPDK_Controller\u001f" 00:12:26.469 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:26.469 
08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x79' 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 
00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:26.469 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.470 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.470 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:26.470 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:26.470 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:26.470 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.470 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.470 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:26.470 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:26.470 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:26.470 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.470 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.470 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@28 -- # [[ A == \- ]] 00:12:26.470 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@31 -- # echo 'A@HHh$P&yHu(<(TqA;`dI' 00:12:26.470 08:50:48 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'A@HHh$P&yHu(<(TqA;`dI' nqn.2016-06.io.spdk:cnode30025 00:12:26.730 [2024-06-09 08:50:49.080477] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30025: invalid serial number 'A@HHh$P&yHu(<(TqA;`dI' 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:26.730 { 00:12:26.730 "nqn": "nqn.2016-06.io.spdk:cnode30025", 00:12:26.730 "serial_number": "A@HHh$P&yHu(<(TqA;`dI", 00:12:26.730 "method": "nvmf_create_subsystem", 00:12:26.730 "req_id": 1 00:12:26.730 } 00:12:26.730 Got JSON-RPC error response 00:12:26.730 response: 00:12:26.730 { 00:12:26.730 "code": -32602, 00:12:26.730 "message": "Invalid SN A@HHh$P&yHu(<(TqA;`dI" 00:12:26.730 }' 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:26.730 { 00:12:26.730 "nqn": "nqn.2016-06.io.spdk:cnode30025", 00:12:26.730 "serial_number": "A@HHh$P&yHu(<(TqA;`dI", 00:12:26.730 "method": "nvmf_create_subsystem", 00:12:26.730 "req_id": 1 00:12:26.730 } 00:12:26.730 Got JSON-RPC error response 00:12:26.730 response: 00:12:26.730 { 00:12:26.730 "code": -32602, 00:12:26.730 "message": "Invalid SN A@HHh$P&yHu(<(TqA;`dI" 
00:12:26.730 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:26.730 08:50:49 
nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.730 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- 
# string+=S 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:26.731 08:50:49 
nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.731 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 
-- # (( ll++ )) 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.991 08:50:49 
nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:26.991 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:26.992 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:26.992 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.992 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.992 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:26.992 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:26.992 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:26.992 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.992 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.992 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:26.992 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:26.992 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:26.992 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.992 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.992 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@28 -- # [[ % == \- ]] 00:12:26.992 08:50:49 nvmf_rdma.nvmf_invalid -- target/invalid.sh@31 -- # echo '%T=Q~[.={X*4xS2[b% R]\UQ|f\Q6<nm2<uXeYK' 00:12:29.326 08:50:51 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:29.326 08:50:51 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.326 08:50:51 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:29.326 08:50:51 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:29.326 08:50:51 nvmf_rdma.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:12:29.326 08:50:51 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@302 -- #
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:34.629 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:34.629 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
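The wall of (( ll++ )) / printf %x / echo -e traces that opens this section is target/invalid.sh assembling a random string one character at a time, which it then echoes (the '%T=Q~[...' line above) and feeds to the target as a deliberately invalid name. A minimal stand-alone sketch of that loop, mirroring the traced commands; the length and code-point range here are chosen for illustration, not taken from the script:

#!/usr/bin/env bash
# Sketch: build a random string the way the invalid.sh trace above does --
# pick a byte value, render it via printf %x + echo -e, append to $string.
length=41                           # illustrative; the real script derives it
string=
for (( ll = 0; ll < length; ll++ )); do
    code=$(( RANDOM % 95 + 32 ))    # illustrative printable-ASCII range
    hex=$(printf '%x' "$code")      # e.g. 50 -> "32"
    string+=$(echo -e "\x$hex")     # append the decoded character
done
echo "$string"

Appending via command substitution keeps even characters like '[' and '|' literal, which is exactly what makes the resulting NQN/serial strings invalid for the target.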
00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@377 -- # modinfo irdma 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:34.629 Found net devices under 0000:af:00.0: cvl_0_0 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:34.629 Found net devices under 0000:af:00.1: cvl_0_1 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@420 -- # rdma_device_init 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@58 -- # uname 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- 
nvmf/common.sh@502 -- # allocate_nic_ips 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:34.629 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo cvl_0_0 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo cvl_0_1 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:12:34.630 8: cvl_0_0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:12:34.630 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:12:34.630 altname enp175s0f0np0 00:12:34.630 altname ens801f0np0 00:12:34.630 inet 192.168.100.8/24 scope global cvl_0_0 00:12:34.630 valid_lft forever preferred_lft forever 00:12:34.630 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:12:34.630 valid_lft forever preferred_lft forever 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show
cvl_0_1 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:12:34.630 9: cvl_0_1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:12:34.630 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:12:34.630 altname enp175s0f1np1 00:12:34.630 altname ens801f1np1 00:12:34.630 inet 192.168.100.9/24 scope global cvl_0_1 00:12:34.630 valid_lft forever preferred_lft forever 00:12:34.630 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:12:34.630 valid_lft forever preferred_lft forever 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:34.630 08:50:56 nvmf_rdma.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo cvl_0_0 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo cvl_0_1 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort --
nvmf/common.sh@113 -- # awk '{print $4}' 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:34.630 192.168.100.9' 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:34.630 192.168.100.9' 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # head -n 1 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:34.630 192.168.100.9' 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # tail -n +2 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # head -n 1 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1254640 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1254640 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@830 -- # '[' -z 1254640 ']' 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
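Both allocate_nic_ips and get_available_rdma_ips in the trace above lean on the same pipeline: ip -o -4 addr show prints one line per address, awk '{print $4}' takes the CIDR field, cut -d/ -f1 drops the prefix length, and head/tail split the resulting list into the two target IPs. Condensed into a stand-alone sketch (interface names and addresses as in the trace):

# Sketch: how NVMF_FIRST/SECOND_TARGET_IP fall out of the RDMA if list above.
get_ip_address() {
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}
RDMA_IP_LIST=$(for dev in cvl_0_0 cvl_0_1; do get_ip_address "$dev"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9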
00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:34.630 08:50:57 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:34.630 [2024-06-09 08:50:57.122424] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:12:34.630 [2024-06-09 08:50:57.122477] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.630 EAL: No free 2048 kB hugepages reported on node 1 00:12:34.630 [2024-06-09 08:50:57.178294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:34.889 [2024-06-09 08:50:57.251977] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:34.889 [2024-06-09 08:50:57.252018] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:34.889 [2024-06-09 08:50:57.252024] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:34.889 [2024-06-09 08:50:57.252030] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:34.889 [2024-06-09 08:50:57.252034] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:34.889 [2024-06-09 08:50:57.252146] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:12:34.889 [2024-06-09 08:50:57.252252] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:12:34.889 [2024-06-09 08:50:57.252253] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.456 08:50:57 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:35.456 08:50:57 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@863 -- # return 0 00:12:35.456 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:35.456 08:50:57 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:35.456 08:50:57 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:35.456 08:50:57 nvmf_rdma.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:35.456 08:50:57 nvmf_rdma.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:12:35.456 08:50:57 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:35.456 08:50:57 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:35.456 [2024-06-09 08:50:57.980402] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x17240d0/0x1723710) succeed. 00:12:35.456 [2024-06-09 08:50:57.989151] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1725400/0x1723c90) succeed. 00:12:35.456 [2024-06-09 08:50:57.989172] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:12:35.456 08:50:57 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:35.456 08:50:57 nvmf_rdma.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:35.456 08:50:57 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:35.456 08:50:57 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:35.715 Malloc0 00:12:35.715 08:50:58 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:35.715 08:50:58 nvmf_rdma.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:35.715 08:50:58 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:35.715 08:50:58 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:35.715 Delay0 00:12:35.715 08:50:58 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:35.715 08:50:58 nvmf_rdma.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:35.715 08:50:58 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:35.715 08:50:58 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:35.715 08:50:58 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:35.715 08:50:58 nvmf_rdma.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:35.715 08:50:58 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:35.715 08:50:58 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:35.715 08:50:58 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:35.715 08:50:58 nvmf_rdma.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:12:35.715 08:50:58 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:35.715 08:50:58 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:35.715 [2024-06-09 08:50:58.059575] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:35.715 08:50:58 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:35.715 08:50:58 nvmf_rdma.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:35.715 08:50:58 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:35.715 08:50:58 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:35.715 08:50:58 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:35.715 08:50:58 nvmf_rdma.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:35.715 EAL: No free 2048 kB hugepages reported on node 1 00:12:35.715 [2024-06-09 08:50:58.134171] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:38.248 Initializing NVMe Controllers 00:12:38.248 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:12:38.248 controller IO queue size 128 less than required 00:12:38.248 Consider using lower queue 
depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:38.248 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:38.248 Initialization complete. Launching workers. 00:12:38.248 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 52073 00:12:38.248 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 52134, failed to submit 62 00:12:38.248 success 52074, unsuccess 60, failed 0 00:12:38.248 08:51:00 nvmf_rdma.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:38.248 08:51:00 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:38.248 08:51:00 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:38.248 08:51:00 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:38.248 08:51:00 nvmf_rdma.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:38.248 08:51:00 nvmf_rdma.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:12:38.248 08:51:00 nvmf_rdma.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:38.248 08:51:00 nvmf_rdma.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:12:38.248 08:51:00 nvmf_rdma.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:38.248 08:51:00 nvmf_rdma.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:38.248 08:51:00 nvmf_rdma.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:12:38.248 08:51:00 nvmf_rdma.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:38.248 08:51:00 nvmf_rdma.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:38.248 rmmod nvme_rdma 00:12:38.248 rmmod nvme_fabrics 00:12:38.248 08:51:00 nvmf_rdma.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:38.248 08:51:00 nvmf_rdma.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:12:38.248 08:51:00 nvmf_rdma.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:12:38.248 08:51:00 nvmf_rdma.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1254640 ']' 00:12:38.248 08:51:00 nvmf_rdma.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1254640 00:12:38.248 08:51:00 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@949 -- # '[' -z 1254640 ']' 00:12:38.248 08:51:00 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@953 -- # kill -0 1254640 00:12:38.248 08:51:00 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@954 -- # uname 00:12:38.248 08:51:00 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:12:38.248 08:51:00 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1254640 00:12:38.249 08:51:00 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:12:38.249 08:51:00 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:12:38.249 08:51:00 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1254640' 00:12:38.249 killing process with pid 1254640 00:12:38.249 08:51:00 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@968 -- # kill 1254640 00:12:38.249 08:51:00 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@973 -- # wait 1254640 00:12:38.249 08:51:00 nvmf_rdma.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:38.249 08:51:00 nvmf_rdma.nvmf_abort -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:38.249 00:12:38.249 real 0m9.032s 00:12:38.249 user 0m13.723s 00:12:38.249 sys 0m4.436s 
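Stripped of the rpc_cmd wrappers and xtrace noise, the nvmf_abort run above boils down to these RPCs against the default /var/tmp/spdk.sock socket, followed by the abort example (-q 128, one core) acting as the initiator; all arguments are verbatim from the trace:

# Sketch: the target-side setup target/abort.sh issues above.
rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
$rpc bdev_malloc_create 64 4096 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

The Delay0 bdev is the point of the exercise: with that much injected latency on every op, I/O submitted by the abort example is still queued when the abort commands arrive, which is why the stats above show 52073 of 52196 I/Os ending as failed (aborted) and only 123 completing normally.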
00:12:38.249 08:51:00 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@1125 -- # xtrace_disable 00:12:38.249 08:51:00 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:38.249 ************************************ 00:12:38.249 END TEST nvmf_abort 00:12:38.249 ************************************ 00:12:38.249 08:51:00 nvmf_rdma -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:12:38.249 08:51:00 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:12:38.249 08:51:00 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:12:38.249 08:51:00 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:38.249 ************************************ 00:12:38.249 START TEST nvmf_ns_hotplug_stress 00:12:38.249 ************************************ 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:12:38.249 * Looking for test storage... 00:12:38.249 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:38.249 08:51:00 
nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:12:38.249 08:51:00 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:43.523 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:43.523 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:43.523 08:51:05 
nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # modinfo irdma 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:43.523 Found net devices under 0000:af:00.0: cvl_0_0 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:43.523 Found net devices under 0000:af:00.1: cvl_0_1 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # uname 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:43.523 
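rdma_device_init, traced next, loads the kernel-side IB/RDMA stack before any IPs are assigned; condensed, with the module list exactly as the modprobe lines show:

# Sketch: modules load_ib_rdma_modules pulls in on Linux (order as traced).
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
done
# Earlier, during NIC detection, the E810's RDMA driver is loaded with
# RoCE enabled (irdma's default personality is iWARP):
modprobe irdma roce_ena=1

roce_ena=1 is what makes the irdma devices register as RoCE ports, consistent with the rocep175s0f0/rocep175s0f1 IB device names the target reported earlier in the log.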
08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:43.523 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo cvl_0_0 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo cvl_0_1 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:12:43.524 8: cvl_0_0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:12:43.524 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:12:43.524 altname enp175s0f0np0 00:12:43.524 altname ens801f0np0 00:12:43.524 inet 192.168.100.8/24 scope global cvl_0_0 00:12:43.524 valid_lft forever preferred_lft forever 00:12:43.524 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:12:43.524 valid_lft forever preferred_lft forever 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:12:43.524 9: cvl_0_1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:12:43.524 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:12:43.524 altname enp175s0f1np1 00:12:43.524 altname ens801f1np1 00:12:43.524 inet 192.168.100.9/24 scope global cvl_0_1 00:12:43.524 valid_lft forever preferred_lft forever 00:12:43.524 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:12:43.524 valid_lft forever preferred_lft forever 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress --
nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo cvl_0_0 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo cvl_0_1 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:43.524 192.168.100.9' 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:43.524 192.168.100.9' 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # head -n 1 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:43.524 192.168.100.9' 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # tail -n +2 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # head -n 1 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:43.524 08:51:05 
nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1258274 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1258274 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@830 -- # '[' -z 1258274 ']' 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:43.524 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:12:43.525 08:51:05 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:43.525 [2024-06-09 08:51:05.567510] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:12:43.525 [2024-06-09 08:51:05.567559] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:43.525 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.525 [2024-06-09 08:51:05.624936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:43.525 [2024-06-09 08:51:05.699514] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:43.525 [2024-06-09 08:51:05.699552] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:43.525 [2024-06-09 08:51:05.699560] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:43.525 [2024-06-09 08:51:05.699566] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:43.525 [2024-06-09 08:51:05.699570] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
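For reference, the startup sequence above is the suite's nvmfappstart pattern: launch nvmf_tgt in the background, record its PID (nvmfpid=1258274 here), and block until the RPC socket answers. A minimal sketch of that pattern, assuming the spdk checkout used by this job; the suite's real waitforlisten helper adds retries and timeouts beyond what is shown:

    # start the target on cores 1-3 (-m 0xE) with all tracepoint groups enabled (-e 0xFFFF)
    spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # poll the default RPC socket until the app is ready to serve requests
    until spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done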
00:12:43.525 [2024-06-09 08:51:05.699667] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:12:43.525 [2024-06-09 08:51:05.699754] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:12:43.525 [2024-06-09 08:51:05.699755] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.092 08:51:06 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:12:44.092 08:51:06 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@863 -- # return 0 00:12:44.092 08:51:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:44.092 08:51:06 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:12:44.092 08:51:06 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:44.092 08:51:06 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:44.092 08:51:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:12:44.092 08:51:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:44.092 [2024-06-09 08:51:06.580077] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x71f0d0/0x71e710) succeed. 00:12:44.092 [2024-06-09 08:51:06.588927] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x720400/0x71ec90) succeed. 00:12:44.092 [2024-06-09 08:51:06.588949] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:12:44.092 08:51:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:44.350 08:51:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:44.609 [2024-06-09 08:51:06.954392] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:44.609 08:51:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:44.867 08:51:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:44.867 Malloc0 00:12:44.867 08:51:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:45.125 Delay0 00:12:45.125 08:51:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:45.383 08:51:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:45.383 NULL1 
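Condensed from the xtrace above, the target-side provisioning for this test comes down to eight RPCs (rpc.py here is short for the full /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py path; the trailing comments are editorial, not part of the log):

    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10     # allow any host, up to 10 namespaces
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    rpc.py bdev_malloc_create 32 512 -b Malloc0                                                # 32 MiB RAM disk, 512 B blocks
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000  # ~1 s injected latency per op (values in microseconds)
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc.py bdev_null_create NULL1 1000 512                                                     # 1000 MiB null bdev, resized repeatedly below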
00:12:45.383 08:51:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:45.658 08:51:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:45.658 08:51:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1258752 00:12:45.658 08:51:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1258752 00:12:45.658 08:51:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:45.658 EAL: No free 2048 kB hugepages reported on node 1 00:12:47.071 Read completed with error (sct=0, sc=11) 00:12:47.071 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:47.071 08:51:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:47.071 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:47.071 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:47.071 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:47.071 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:47.071 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:47.071 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:47.071 08:51:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:12:47.071 08:51:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:47.071 true 00:12:47.329 08:51:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1258752 00:12:47.329 08:51:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:48.265 08:51:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:48.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:48.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:48.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:48.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:48.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:48.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:48.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:48.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:48.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:48.265 
08:51:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:12:48.265 08:51:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:48.592 true 00:12:48.593 08:51:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1258752 00:12:48.593 08:51:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:49.528 08:51:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:49.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:49.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:49.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:49.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:49.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:49.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:49.528 08:51:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:12:49.528 08:51:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:49.528 true 00:12:49.786 08:51:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1258752 00:12:49.786 08:51:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:50.354 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:50.612 08:51:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:50.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:50.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:50.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:50.613 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:50.613 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:50.613 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:50.613 08:51:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:12:50.613 08:51:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:50.870 true 00:12:50.871 08:51:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1258752 00:12:50.871 08:51:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.805 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:12:51.805 08:51:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:51.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:51.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:51.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:51.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:51.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:51.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:51.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:51.805 08:51:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:12:51.805 08:51:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:12:52.063 true 00:12:52.063 08:51:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1258752 00:12:52.063 08:51:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.000 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:53.000 08:51:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:53.000 08:51:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:12:53.000 08:51:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:53.258 true 00:12:53.258 08:51:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1258752 00:12:53.258 08:51:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.517 08:51:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:53.517 08:51:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:12:53.517 08:51:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:53.775 true 00:12:53.775 08:51:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1258752 00:12:53.775 08:51:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.152 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:55.152 08:51:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:12:55.152 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:55.152 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:55.152 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:55.152 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:55.152 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:55.152 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:55.153 08:51:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:12:55.153 08:51:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:55.153 true 00:12:55.153 08:51:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1258752 00:12:55.153 08:51:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:56.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:56.087 08:51:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:56.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:56.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:56.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:56.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:56.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:56.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:56.346 08:51:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:12:56.346 08:51:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:56.346 true 00:12:56.346 08:51:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1258752 00:12:56.346 08:51:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.280 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:57.280 08:51:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:57.280 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:57.280 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:57.280 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:57.280 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:57.280 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:57.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:57.539 08:51:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:12:57.539 08:51:19 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:57.539 true 00:12:57.539 08:51:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1258752 00:12:57.539 08:51:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.474 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:58.474 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:58.474 08:51:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:58.474 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:58.474 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:58.474 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:58.474 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:58.731 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:58.731 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:58.731 08:51:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:12:58.731 08:51:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:58.731 true 00:12:58.989 08:51:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1258752 00:12:58.989 08:51:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:59.815 08:51:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:59.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:59.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:59.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:59.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:59.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:59.815 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:59.815 08:51:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:12:59.815 08:51:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:00.074 true 00:13:00.074 08:51:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1258752 00:13:00.074 08:51:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.010 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
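The block repeating above and below is the single-namespace phase of ns_hotplug_stress.sh: while spdk_nvme_perf (PERF_PID=1258752, started earlier with -t 30 -q 128 -w randread -o 512) hammers the subsystem, the script detaches namespace 1, re-attaches Delay0, and grows NULL1 by one block each pass. Paraphrased from the @44-@50 trace lines (not a verbatim copy of the script):

    while kill -0 $PERF_PID 2>/dev/null; do      # loop ends once perf (1258752 here) exits
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))             # 1001, 1002, ... as logged
        rpc.py bdev_null_resize NULL1 $null_size
    done

The recurring 'Message suppressed 999 times: Read completed with error (sct=0, sc=11)' entries are presumably the initiator's reads failing (sct=0/sc=0x0b, Invalid Namespace or Format) while namespace 1 is detached; for this stress test they are expected noise rather than failures.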
00:13:01.010 08:51:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:01.010 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:01.010 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:01.010 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:01.010 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:01.010 08:51:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:01.010 08:51:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:01.268 true 00:13:01.268 08:51:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1258752 00:13:01.268 08:51:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:02.201 08:51:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:02.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:02.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:02.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:02.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:02.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:02.201 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:02.201 08:51:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:02.201 08:51:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:02.459 true 00:13:02.459 08:51:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1258752 00:13:02.459 08:51:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:03.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:03.393 08:51:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:03.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:03.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:03.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:03.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:03.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:03.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:03.393 08:51:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:03.393 
08:51:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:03.651 true 00:13:03.651 08:51:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1258752 00:13:03.651 08:51:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.588 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:04.588 08:51:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:04.588 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:04.588 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:04.588 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:04.588 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:04.588 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:04.588 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:04.588 08:51:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:04.588 08:51:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:04.846 true 00:13:04.846 08:51:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1258752 00:13:04.846 08:51:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.782 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:05.782 08:51:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:05.782 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:05.782 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:05.782 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:05.782 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:05.782 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:05.782 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:05.782 08:51:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:05.782 08:51:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:06.041 true 00:13:06.041 08:51:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1258752 00:13:06.041 08:51:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:06.977 08:51:29 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:06.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:06.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:06.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:06.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:06.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:06.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:06.977 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:06.977 08:51:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:06.977 08:51:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:07.236 true 00:13:07.236 08:51:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1258752 00:13:07.236 08:51:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.171 08:51:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:08.171 08:51:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:08.171 08:51:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:08.430 true 00:13:08.430 08:51:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1258752 00:13:08.430 08:51:30 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.688 08:51:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:08.688 08:51:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:08.688 08:51:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:08.947 true 00:13:08.947 08:51:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1258752 00:13:08.947 08:51:31 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:10.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:10.323 08:51:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:10.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:13:10.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:10.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:10.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:10.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:10.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:10.323 08:51:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:10.323 08:51:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:10.582 true 00:13:10.582 08:51:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1258752 00:13:10.582 08:51:32 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.518 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:11.518 08:51:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:11.518 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:11.518 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:11.518 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:11.518 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:11.518 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:11.518 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:11.518 08:51:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:11.518 08:51:33 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:11.518 true 00:13:11.776 08:51:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1258752 00:13:11.776 08:51:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.712 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:12.712 08:51:34 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:12.712 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:12.712 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:12.712 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:12.712 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:12.712 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:12.712 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:12.712 08:51:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:12.712 08:51:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:12.712 true 00:13:12.969 08:51:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1258752 00:13:12.969 08:51:35 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:13.902 08:51:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:13.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:13.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:13.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:13.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:13.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:13.902 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:13.902 08:51:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:13.902 08:51:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:13.902 true 00:13:14.160 08:51:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1258752 00:13:14.160 08:51:36 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.094 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:15.094 08:51:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:15.094 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:15.094 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:15.094 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:15.094 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:15.094 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:15.094 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:15.094 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:15.094 08:51:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:15.094 08:51:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:15.353 true 00:13:15.353 08:51:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1258752 00:13:15.353 08:51:37 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.287 08:51:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:16.287 08:51:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:16.287 08:51:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:16.287 true 00:13:16.545 08:51:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1258752 00:13:16.545 08:51:38 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.545 08:51:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:16.803 08:51:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:16.803 08:51:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:17.061 true 00:13:17.061 08:51:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1258752 00:13:17.061 08:51:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.061 08:51:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:17.319 08:51:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:17.319 08:51:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:17.578 true 00:13:17.578 08:51:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1258752 00:13:17.578 08:51:39 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.578 08:51:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:17.836 08:51:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:17.836 08:51:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:18.094 true 00:13:18.094 08:51:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1258752 00:13:18.094 08:51:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:18.094 Initializing NVMe Controllers
00:13:18.094 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:13:18.094 Controller IO queue size 128, less than required.
00:13:18.094 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:13:18.094 Controller IO queue size 128, less than required.
00:13:18.094 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:13:18.094 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:13:18.094 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:13:18.094 Initialization complete. Launching workers.
00:13:18.094 ========================================================
00:13:18.094                                                                                  Latency(us)
00:13:18.094 Device Information                                                           :       IOPS      MiB/s    Average        min        max
00:13:18.094 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    5231.13       2.55   22237.10     974.57 1138343.58
00:13:18.094 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   35352.43      17.26    3620.55    2249.23  296455.30
00:13:18.094 ========================================================
00:13:18.094 Total                                                                        :   40583.57      19.82    6020.18     974.57 1138343.58
00:13:18.094
00:13:18.353 08:51:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:18.353 08:51:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:13:18.353 08:51:40 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:13:18.611 true 00:13:18.611 08:51:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1258752 00:13:18.611 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1258752) - No such process 00:13:18.611 08:51:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1258752 00:13:18.611 08:51:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.869 08:51:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:18.869 08:51:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:18.869 08:51:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:18.869 08:51:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:18.869 08:51:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:18.869 08:51:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:19.128 null0 00:13:19.128 08:51:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:19.128 08:51:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:19.128 08:51:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:19.386 null1 00:13:19.386 08:51:41
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:19.386 08:51:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:19.386 08:51:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:19.387 null2 00:13:19.645 08:51:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:19.645 08:51:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:19.645 08:51:41 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:19.645 null3 00:13:19.645 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:19.645 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:19.645 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:19.903 null4 00:13:19.903 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:19.903 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:19.903 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:19.903 null5 00:13:20.162 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:20.162 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:20.162 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:20.162 null6 00:13:20.162 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:20.162 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:20.162 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:20.422 null7 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
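From 00:13:18.869 onward the test switches to its parallel phase: the @59-@60 loop above creates eight 100 MiB null bdevs with 4096-byte blocks, and the @62-@64 loop backgrounds one add_remove worker per bdev, collecting each job's PID with pids+=($!). In outline (paraphrased from the trace; the worker itself is sketched at the end of this section):

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        rpc.py bdev_null_create null$i 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) null$i &   # one hot-plug worker per namespace ID
        pids+=($!)
    done
    wait "${pids[@]}"                    # the 'wait 1264878 1264879 ...' entry below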
00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:20.422 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.423 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:20.423 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:20.423 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:20.423 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:20.423 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:20.423 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:20.423 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:20.423 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.423 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:20.423 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
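The surrounding @62-@64 frames are the harness that fans those workers out: each add_remove call is backgrounded, its pid is recorded, and the @66 `wait` visible just below joins all eight (pids 1264878 through 1264891). Sketched from the trace:

    pids=()
    for (( i = 0; i < nthreads; i++ )); do
        # nsid i+1 is paired with bdev null$i, matching the traced calls
        add_remove "$((i + 1))" "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"

Because all eight workers hit the same subsystem concurrently, their add and remove RPCs interleave arbitrarily in the trace that follows; that interleaving is exactly the namespace hot-plug stress being exercised.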
00:13:20.423 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:20.423 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:20.423 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:20.423 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:20.423 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:20.423 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.423 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:20.423 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:20.423 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:20.423 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:20.423 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:20.423 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:20.423 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1264878 1264879 1264882 1264883 1264885 1264887 1264889 1264891 00:13:20.423 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:20.423 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.423 08:51:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:20.681 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:20.681 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:20.681 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.681 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:20.681 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:20.681 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:20.681 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:20.681 08:51:43 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:20.681 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.681 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.681 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:20.681 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.681 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.681 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:20.681 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.681 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.681 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:20.681 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.681 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.681 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:20.939 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.939 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.939 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:20.939 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.939 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.939 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:20.939 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.939 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.939 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:20.940 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:20.940 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:20.940 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:20.940 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:20.940 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:20.940 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:20.940 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:20.940 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.940 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:20.940 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:20.940 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:21.199 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.199 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.199 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:21.199 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.199 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.199 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:21.199 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.199 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.199 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:21.199 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.199 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.199 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 
00:13:21.199 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.199 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.199 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:21.199 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.199 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.199 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:21.199 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.199 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.199 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:21.199 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.199 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.199 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:21.458 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:21.458 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:21.458 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:21.458 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:21.458 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.458 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:21.458 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:21.458 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:21.458 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.458 08:51:43 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.458 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:21.458 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.458 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.458 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:21.458 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.458 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.458 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:21.458 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.458 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.458 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:21.458 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.458 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.458 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:21.459 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.459 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.459 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:21.459 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.459 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.459 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:21.459 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.459 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.459 08:51:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:21.718 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:21.718 08:51:44 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:21.718 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:21.718 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:21.718 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.718 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:21.718 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:21.718 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:21.976 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.976 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.976 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:21.976 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.976 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.976 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:21.976 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.976 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.976 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:21.976 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.976 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.976 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:21.976 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.976 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.976 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:21.976 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.976 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.976 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:21.976 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.976 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.976 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:21.976 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:21.976 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:21.976 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:21.976 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:21.976 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:21.976 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:21.976 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:21.976 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.234 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:22.235 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:22.235 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:22.235 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.235 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.235 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 
00:13:22.235 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.235 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.235 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:22.235 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.235 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.235 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:22.235 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.235 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.235 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:22.235 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.235 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.235 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:22.235 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.235 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.235 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:22.235 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.235 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.235 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:22.235 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.235 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.235 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:22.493 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:22.493 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:22.493 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:22.493 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:22.493 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.493 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:22.493 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:22.493 08:51:44 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:22.752 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.752 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.752 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:22.752 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.752 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.752 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.752 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.752 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:22.752 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:22.752 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.752 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.752 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:22.752 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.752 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.752 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:22.752 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.752 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:13:22.752 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:22.752 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.752 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.752 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:22.752 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.752 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.752 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:22.752 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:22.752 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:22.752 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:22.752 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:22.752 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:22.752 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:22.752 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:22.752 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.011 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.011 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.011 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:23.011 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.011 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.011 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:23.011 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.011 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.011 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:23.011 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.011 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.011 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:23.011 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.011 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.011 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:23.011 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.011 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.011 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:23.011 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.011 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.011 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:23.011 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.011 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.011 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:23.269 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:23.269 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:23.269 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:23.269 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 
5 00:13:23.269 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.269 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:23.269 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:23.269 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:23.528 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.528 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.528 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:23.528 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.528 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.528 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:23.528 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.528 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.528 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:23.528 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.528 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.528 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:23.528 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.528 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.528 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:23.528 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.528 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.529 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:23.529 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.529 
08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.529 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:23.529 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.529 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.529 08:51:45 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:23.529 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:23.529 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:23.529 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:23.529 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:23.529 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:23.529 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:23.529 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:23.529 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.787 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.787 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.787 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:23.787 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.787 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.787 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:23.787 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.787 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.787 08:51:46 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:23.787 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.787 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.787 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:23.787 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.787 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.787 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:23.788 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.788 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.788 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:23.788 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.788 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.788 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:23.788 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.788 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.788 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:24.045 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:24.045 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:24.045 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:24.045 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:24.045 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:24.046 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:24.046 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.046 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:24.046 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.046 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.046 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.046 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.304 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.304 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.304 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.304 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.304 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.304 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.304 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.304 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.304 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.304 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.304 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.304 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.304 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:24.304 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:24.304 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:24.304 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:13:24.304 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:24.304 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:24.304 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:13:24.304 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:24.304 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:24.304 rmmod nvme_rdma 00:13:24.304 rmmod nvme_fabrics 00:13:24.304 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:24.304 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:13:24.304 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:13:24.305 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@489 -- # '[' -n 1258274 ']' 00:13:24.305 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1258274 00:13:24.305 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@949 -- # '[' -z 1258274 ']' 00:13:24.305 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # kill -0 1258274 00:13:24.305 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # uname 00:13:24.305 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:24.305 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1258274 00:13:24.305 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:13:24.305 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:13:24.305 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1258274' 00:13:24.305 killing process with pid 1258274 00:13:24.305 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # kill 1258274 00:13:24.305 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # wait 1258274 00:13:24.564 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:24.564 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:24.564 00:13:24.564 real 0m46.293s 00:13:24.564 user 3m17.426s 00:13:24.564 sys 0m11.292s 00:13:24.564 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:24.564 08:51:46 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.564 ************************************ 00:13:24.564 END TEST nvmf_ns_hotplug_stress 00:13:24.564 ************************************ 00:13:24.564 08:51:46 nvmf_rdma -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:13:24.564 08:51:46 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:24.564 08:51:46 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:24.564 08:51:46 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:24.564 ************************************ 00:13:24.564 START TEST nvmf_connect_stress 00:13:24.564 ************************************ 00:13:24.564 08:51:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:13:24.564 * Looking for test storage... 
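Before the connect_stress storage probe continues below, it is worth unpacking the two teardown paths that closed nvmf_ns_hotplug_stress above. First, nvmfcleanup (nvmf/common.sh@117-@125) syncs and, for the rdma transport, unloads the kernel modules with errors tolerated, retrying up to 20 times because the modules can still hold references right after the target exits; the `rmmod nvme_rdma` / `rmmod nvme_fabrics` lines are modprobe's verbose output from the successful pass. A sketch of that sequence (the transport variable name, the early break, and any back-off between retries are assumptions; only the commands and the {1..20} bound are visible in the trace):

    sync
    if [[ "$transport" == rdma ]]; then
        set +e                           # unloads may fail while references drain
        for i in {1..20}; do
            # assumed to stop at the first success; only one iteration appears in the log
            modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        done
        set -e
    fi

Second, killprocess (autotest_common.sh@949-@973) stops the nvmf target, pid 1258274 here: it checks that a pid was supplied, confirms the process is alive, looks up the command name with ps so it never signals a bare sudo wrapper, then kills and waits. Reconstructed from the traced checks; the return-value handling is an assumption:

    killprocess() {
        local pid=$1
        [[ -n "$pid" ]] || return 1            # @949: a pid must be supplied
        kill -0 "$pid" || return 0             # @953: already gone, nothing to do
        if [[ "$(uname)" == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_1 in this run
            [[ "$process_name" != sudo ]] || return 1         # refuse to kill sudo itself
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }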
00:13:24.564 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:13:24.564 08:51:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:13:24.564 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:24.564 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:24.564 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:24.564 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:24.564 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:24.564 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:24.564 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:24.564 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:24.564 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:24.564 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:24.564 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:24.564 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:24.564 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:13:24.564 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:24.564 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:24.564 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:24.564 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:24.564 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:13:24.564 08:51:47 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.564 08:51:47 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.564 08:51:47 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.564 08:51:47 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.564 08:51:47 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.565 08:51:47 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.565 08:51:47 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:24.565 08:51:47 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.565 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:24.565 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:24.565 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:24.565 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:24.565 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:24.565 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:24.565 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:24.565 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:24.565 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:24.565 08:51:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:24.565 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:24.565 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:24.565 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:24.565 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:24.565 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:24.565 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.565 08:51:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:13:24.565 08:51:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.565 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:24.565 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:24.565 08:51:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:24.565 08:51:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:29.842 08:51:52 
nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:29.842 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:29.842 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:29.843 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@377 -- # modinfo irdma 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:29.843 08:51:52 
nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:29.843 Found net devices under 0000:af:00.0: cvl_0_0 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:29.843 Found net devices under 0000:af:00.1: cvl_0_1 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@58 -- # uname 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:29.843 
08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo cvl_0_0 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo cvl_0_1 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:13:29.843 8: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:13:29.843 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:13:29.843 altname enp175s0f0np0 00:13:29.843 altname ens801f0np0 00:13:29.843 inet 192.168.100.8/24 scope global cvl_0_0 00:13:29.843 valid_lft forever preferred_lft forever 00:13:29.843 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:13:29.843 valid_lft forever preferred_lft forever 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- 
nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:13:29.843 9: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:13:29.843 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:13:29.843 altname enp175s0f1np1 00:13:29.843 altname ens801f1np1 00:13:29.843 inet 192.168.100.9/24 scope global cvl_0_1 00:13:29.843 valid_lft forever preferred_lft forever 00:13:29.843 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:13:29.843 valid_lft forever preferred_lft forever 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:29.843 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo cvl_0_0 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo cvl_0_1 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:29.844 08:51:52 
nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:29.844 192.168.100.9' 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:29.844 192.168.100.9' 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # head -n 1 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:29.844 192.168.100.9' 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # tail -n +2 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # head -n 1 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:29.844 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:30.113 08:51:52 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:30.113 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:30.113 08:51:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:30.113 08:51:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.113 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1268752 00:13:30.113 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1268752 00:13:30.113 08:51:52 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:30.113 08:51:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@830 -- # '[' -z 1268752 ']' 00:13:30.113 08:51:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.113 08:51:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:30.113 08:51:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
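By this point nvmftestinit is done with network bring-up: both E810 ports are bound to irdma in RoCE mode, cvl_0_0 carries 192.168.100.8/24 and cvl_0_1 carries 192.168.100.9/24, the transport options are set to '-t rdma --num-shared-buffers 1024', and nvmf_tgt is being launched. A rough by-hand equivalent of those steps, assuming root on the same rig; the device names, addresses, and nvmf_tgt command line come from the trace, the rest is illustrative rather than the test's actual code path:

#!/usr/bin/env bash
# Approximate manual replay of the nvmftestinit bring-up traced above.
modprobe irdma roce_ena=1                  # RoCE on the E810 ports, as above
for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$m"                          # same RDMA core modules the trace loads
done
ip -o -4 addr show cvl_0_0                 # expect 192.168.100.8/24
ip -o -4 addr show cvl_0_1                 # expect 192.168.100.9/24
modprobe nvme-rdma                         # host-side transport for later connects
# Launch the target exactly as nvmfappstart does in the trace:
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &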
00:13:30.113 08:51:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:30.113 08:51:52 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.113 [2024-06-09 08:51:52.458080] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:13:30.113 [2024-06-09 08:51:52.458124] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:30.113 EAL: No free 2048 kB hugepages reported on node 1 00:13:30.113 [2024-06-09 08:51:52.513063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:30.113 [2024-06-09 08:51:52.587869] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:30.113 [2024-06-09 08:51:52.587909] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:30.113 [2024-06-09 08:51:52.587916] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:30.113 [2024-06-09 08:51:52.587921] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:30.113 [2024-06-09 08:51:52.587926] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:30.113 [2024-06-09 08:51:52.588036] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:13:30.113 [2024-06-09 08:51:52.588143] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:13:30.113 [2024-06-09 08:51:52.588144] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.716 08:51:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:30.716 08:51:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@863 -- # return 0 00:13:30.716 08:51:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:30.716 08:51:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:30.716 08:51:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.976 [2024-06-09 08:51:53.320533] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x24530d0/0x2452710) succeed. 00:13:30.976 [2024-06-09 08:51:53.329443] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x2454400/0x2452c90) succeed. 00:13:30.976 [2024-06-09 08:51:53.329465] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.976 [2024-06-09 08:51:53.349694] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.976 NULL1 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1268998 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.976 08:51:53 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:30.976 EAL: No free 2048 kB hugepages reported on node 1 [... the target/connect_stress.sh@27 for-loop / @28 cat pair repeats identically for the remaining iterations of seq 1 20 ...] 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1268998 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:30.976 08:51:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.235 08:51:53 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] [... this @588 guard / @34 kill -0 1268998 / @35 rpc_cmd liveness block repeats continuously while connect_stress runs, from 00:13:31.235 through 00:13:40.621 ...] 00:13:40.621 08:52:02
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1268998 00:13:40.621 08:52:02 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.621 08:52:02 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:40.621 08:52:02 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.882 08:52:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:40.882 08:52:03 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1268998 00:13:40.883 08:52:03 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.883 08:52:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:40.883 08:52:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.140 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:13:41.140 08:52:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:41.140 08:52:03 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1268998 00:13:41.140 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1268998) - No such process 00:13:41.140 08:52:03 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1268998 00:13:41.140 08:52:03 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:41.140 08:52:03 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:41.140 08:52:03 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:41.140 08:52:03 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:41.140 08:52:03 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:13:41.140 08:52:03 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:41.140 08:52:03 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:41.140 08:52:03 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:13:41.140 08:52:03 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:41.140 08:52:03 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:41.140 rmmod nvme_rdma 00:13:41.140 rmmod nvme_fabrics 00:13:41.140 08:52:03 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:41.140 08:52:03 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:13:41.140 08:52:03 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:13:41.140 08:52:03 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1268752 ']' 00:13:41.140 08:52:03 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1268752 00:13:41.140 08:52:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@949 -- # '[' -z 1268752 ']' 00:13:41.140 08:52:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@953 -- # kill -0 1268752 00:13:41.140 08:52:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@954 -- # uname 00:13:41.141 08:52:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:41.141 08:52:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 
1268752 00:13:41.141 08:52:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:13:41.141 08:52:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:13:41.141 08:52:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1268752' 00:13:41.141 killing process with pid 1268752 00:13:41.141 08:52:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@968 -- # kill 1268752 00:13:41.141 08:52:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@973 -- # wait 1268752 00:13:41.399 08:52:03 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:41.399 08:52:03 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:41.399 00:13:41.399 real 0m16.899s 00:13:41.399 user 0m41.650s 00:13:41.399 sys 0m7.801s 00:13:41.399 08:52:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:41.399 08:52:03 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.399 ************************************ 00:13:41.399 END TEST nvmf_connect_stress 00:13:41.399 ************************************ 00:13:41.399 08:52:03 nvmf_rdma -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:13:41.399 08:52:03 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:41.399 08:52:03 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:41.399 08:52:03 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:41.658 ************************************ 00:13:41.658 START TEST nvmf_fused_ordering 00:13:41.658 ************************************ 00:13:41.658 08:52:03 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:13:41.658 * Looking for test storage... 
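connect_stress, which just finished (16.9 s real), builds a one-subsystem target and then lets the connect_stress client hammer controller connect/disconnect over RDMA for ten seconds while the shell polls it with kill -0. The target-side RPCs and the client command line are verbatim from the trace above, only the wrapper differs; the contents of the rpc.txt batch built by the @27/@28 loop are not shown in the trace and are omitted here:

#!/usr/bin/env bash
# Target setup and stress client for the connect_stress run, as traced above.
rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
"$rpc" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
"$rpc" bdev_null_create NULL1 1000 512     # null bdev: name NULL1, size 1000, block size 512
# Stress client; -t 10 matches the ~10 s liveness-poll window seen above:
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvme/connect_stress/connect_stress \
    -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10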
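Both finished tests exit through the same nvmftestfini path, whose trace appears twice above: sync, unload nvme-rdma and nvme-fabrics inside a {1..20} retry loop, then killprocess on the nvmf_tgt pid. A hedged reconstruction of that teardown; the retry bound, module names, and kill/wait pair are from the trace, while the function body, break-on-success, and sleep are assumptions about code not shown here:

#!/usr/bin/env bash
# Sketch of the nvmftestfini/nvmfcleanup teardown seen after each test above.
nvmfpid=1268752                 # pid of nvmf_tgt, recorded at nvmfappstart time
sync                            # flush before pulling the transport modules
set +e                          # module removal may fail while references remain
for i in {1..20}; do            # same retry bound as nvmf/common.sh@121
    modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
    sleep 1                     # assumption: pause between unload attempts
done
set -e
kill "$nvmfpid"                 # killprocess: stop the target app
wait "$nvmfpid" 2>/dev/null     # reap it (works when launched from this shell)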
00:13:41.658 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:13:41.658 08:52:04 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:13:41.658 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:41.658 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:41.658 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:41.658 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:41.658 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:41.658 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:41.658 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:41.658 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:41.658 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:41.658 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:41.658 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:41.658 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:41.658 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:13:41.658 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:41.658 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:41.658 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:41.658 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:41.658 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:13:41.658 08:52:04 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:41.658 08:52:04 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:41.658 08:52:04 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:41.659 08:52:04 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.659 08:52:04 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.659 08:52:04 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.659 08:52:04 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:41.659 08:52:04 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.659 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:13:41.659 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:41.659 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:41.659 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:41.659 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:41.659 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:41.659 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:41.659 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:41.659 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:41.659 08:52:04 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:41.659 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:41.659 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:41.659 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:41.659 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:41.659 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:41.659 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.659 08:52:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:13:41.659 08:52:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.659 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:41.659 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:41.659 08:52:04 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:13:41.659 08:52:04 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:46.929 08:52:08 
nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:46.929 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:46.929 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@377 -- # modinfo irdma 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:46.929 08:52:08 
nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:46.929 Found net devices under 0000:af:00.0: cvl_0_0 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:46.929 Found net devices under 0000:af:00.1: cvl_0_1 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@420 -- # rdma_device_init 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@58 -- # uname 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:46.929 
08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:46.929 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo cvl_0_0 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo cvl_0_1 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:13:46.930 8: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:13:46.930 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:13:46.930 altname enp175s0f0np0 00:13:46.930 altname ens801f0np0 00:13:46.930 inet 192.168.100.8/24 scope global cvl_0_0 00:13:46.930 valid_lft forever preferred_lft forever 00:13:46.930 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:13:46.930 valid_lft forever preferred_lft forever 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- 
nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:13:46.930 9: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:13:46.930 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:13:46.930 altname enp175s0f1np1 00:13:46.930 altname ens801f1np1 00:13:46.930 inet 192.168.100.9/24 scope global cvl_0_1 00:13:46.930 valid_lft forever preferred_lft forever 00:13:46.930 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:13:46.930 valid_lft forever preferred_lft forever 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo cvl_0_0 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo cvl_0_1 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:46.930 08:52:08 
nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:46.930 192.168.100.9' 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:46.930 192.168.100.9' 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # head -n 1 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:46.930 192.168.100.9' 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # tail -n +2 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # head -n 1 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1273631 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1273631 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@830 -- # '[' -z 1273631 ']' 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local max_retries=100 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
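The trace above shows how the harness derives the two target addresses: each RDMA-capable interface is queried with ip -o -4 addr show, the ADDR/PREFIX field is stripped to a bare IP, and the resulting list is split with head/tail into NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. A minimal standalone sketch of that discovery pattern, assuming the cvl_0_0/cvl_0_1 devices found earlier in this run (get_ip_address here is reconstructed from the trace, not copied from nvmf/common.sh):

#!/usr/bin/env bash
# Sketch: derive the NVMe-oF target IPs from the RDMA net devices, as traced above.
get_ip_address() {
    local interface=$1
    # Field 4 of `ip -o -4 addr show` is "ADDR/PREFIX"; cut strips the prefix length.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

rdma_ip_list=""
for nic in cvl_0_0 cvl_0_1; do        # interfaces discovered earlier in this log
    rdma_ip_list+="$(get_ip_address "$nic")"$'\n'
done

# The first address becomes the primary listener IP and the second the secondary,
# mirroring the head -n 1 / tail -n +2 pipeline in the trace.
NVMF_FIRST_TARGET_IP=$(echo "$rdma_ip_list" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$rdma_ip_list" | tail -n +2 | head -n 1)
echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 192.168.100.9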
00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:46.930 08:52:08 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:46.930 [2024-06-09 08:52:08.743557] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:13:46.930 [2024-06-09 08:52:08.743599] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:46.930 EAL: No free 2048 kB hugepages reported on node 1 00:13:46.930 [2024-06-09 08:52:08.799012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.930 [2024-06-09 08:52:08.875151] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:46.930 [2024-06-09 08:52:08.875184] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:46.930 [2024-06-09 08:52:08.875191] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:46.930 [2024-06-09 08:52:08.875197] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:46.930 [2024-06-09 08:52:08.875205] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:46.930 [2024-06-09 08:52:08.875227] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.189 08:52:09 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:47.189 08:52:09 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@863 -- # return 0 00:13:47.190 08:52:09 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:47.190 08:52:09 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:47.190 08:52:09 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:47.190 08:52:09 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:47.190 08:52:09 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:47.190 08:52:09 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:47.190 08:52:09 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:47.190 [2024-06-09 08:52:09.585269] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x1eb3af0/0x1eb3130) succeed. 00:13:47.190 [2024-06-09 08:52:09.593649] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1eb4da0/0x1eb36b0) succeed. 00:13:47.190 [2024-06-09 08:52:09.593671] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:13:47.190 08:52:09 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:47.190 08:52:09 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:47.190 08:52:09 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:47.190 08:52:09 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:47.190 08:52:09 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:47.190 08:52:09 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:47.190 08:52:09 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:47.190 08:52:09 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:47.190 [2024-06-09 08:52:09.607057] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:47.190 08:52:09 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:47.190 08:52:09 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:47.190 08:52:09 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:47.190 08:52:09 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:47.190 NULL1 00:13:47.190 08:52:09 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:47.190 08:52:09 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:47.190 08:52:09 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:47.190 08:52:09 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:47.190 08:52:09 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:47.190 08:52:09 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:47.190 08:52:09 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:47.190 08:52:09 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:47.190 08:52:09 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:47.190 08:52:09 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:47.190 [2024-06-09 08:52:09.650073] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:13:47.190 [2024-06-09 08:52:09.650107] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1273872 ] 00:13:47.190 EAL: No free 2048 kB hugepages reported on node 1
Attached to nqn.2016-06.io.spdk:cnode1
Namespace ID: 1 size: 1GB
fused_ordering(0) ... fused_ordering(956) 00:13:47.971
fused_ordering(957) 00:13:47.971 fused_ordering(958) 00:13:47.971 fused_ordering(959) 00:13:47.971 fused_ordering(960) 00:13:47.971 fused_ordering(961) 00:13:47.971 fused_ordering(962) 00:13:47.971 fused_ordering(963) 00:13:47.971 fused_ordering(964) 00:13:47.971 fused_ordering(965) 00:13:47.971 fused_ordering(966) 00:13:47.971 fused_ordering(967) 00:13:47.971 fused_ordering(968) 00:13:47.971 fused_ordering(969) 00:13:47.971 fused_ordering(970) 00:13:47.971 fused_ordering(971) 00:13:47.971 fused_ordering(972) 00:13:47.971 fused_ordering(973) 00:13:47.971 fused_ordering(974) 00:13:47.971 fused_ordering(975) 00:13:47.971 fused_ordering(976) 00:13:47.971 fused_ordering(977) 00:13:47.971 fused_ordering(978) 00:13:47.971 fused_ordering(979) 00:13:47.971 fused_ordering(980) 00:13:47.971 fused_ordering(981) 00:13:47.971 fused_ordering(982) 00:13:47.971 fused_ordering(983) 00:13:47.971 fused_ordering(984) 00:13:47.971 fused_ordering(985) 00:13:47.971 fused_ordering(986) 00:13:47.971 fused_ordering(987) 00:13:47.971 fused_ordering(988) 00:13:47.971 fused_ordering(989) 00:13:47.971 fused_ordering(990) 00:13:47.971 fused_ordering(991) 00:13:47.971 fused_ordering(992) 00:13:47.971 fused_ordering(993) 00:13:47.971 fused_ordering(994) 00:13:47.971 fused_ordering(995) 00:13:47.971 fused_ordering(996) 00:13:47.971 fused_ordering(997) 00:13:47.971 fused_ordering(998) 00:13:47.971 fused_ordering(999) 00:13:47.971 fused_ordering(1000) 00:13:47.971 fused_ordering(1001) 00:13:47.971 fused_ordering(1002) 00:13:47.971 fused_ordering(1003) 00:13:47.971 fused_ordering(1004) 00:13:47.971 fused_ordering(1005) 00:13:47.971 fused_ordering(1006) 00:13:47.971 fused_ordering(1007) 00:13:47.971 fused_ordering(1008) 00:13:47.971 fused_ordering(1009) 00:13:47.971 fused_ordering(1010) 00:13:47.971 fused_ordering(1011) 00:13:47.971 fused_ordering(1012) 00:13:47.971 fused_ordering(1013) 00:13:47.971 fused_ordering(1014) 00:13:47.971 fused_ordering(1015) 00:13:47.971 fused_ordering(1016) 00:13:47.971 fused_ordering(1017) 00:13:47.971 fused_ordering(1018) 00:13:47.971 fused_ordering(1019) 00:13:47.971 fused_ordering(1020) 00:13:47.971 fused_ordering(1021) 00:13:47.971 fused_ordering(1022) 00:13:47.971 fused_ordering(1023) 00:13:47.971 08:52:10 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:47.971 08:52:10 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:47.971 08:52:10 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:47.971 08:52:10 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:13:47.971 08:52:10 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:47.971 08:52:10 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:47.971 08:52:10 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:13:47.971 08:52:10 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:47.971 08:52:10 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:47.971 rmmod nvme_rdma 00:13:47.971 rmmod nvme_fabrics 00:13:47.971 08:52:10 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:47.971 08:52:10 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:13:47.971 08:52:10 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:13:47.971 08:52:10 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1273631 ']' 00:13:47.971 
08:52:10 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1273631 00:13:47.971 08:52:10 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@949 -- # '[' -z 1273631 ']' 00:13:47.971 08:52:10 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # kill -0 1273631 00:13:47.971 08:52:10 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # uname 00:13:47.971 08:52:10 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:13:47.971 08:52:10 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1273631 00:13:47.971 08:52:10 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:13:47.971 08:52:10 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:13:47.971 08:52:10 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1273631' 00:13:47.971 killing process with pid 1273631 00:13:47.971 08:52:10 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # kill 1273631 00:13:47.971 08:52:10 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # wait 1273631 00:13:48.231 08:52:10 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:48.231 08:52:10 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:48.231 00:13:48.231 real 0m6.620s 00:13:48.231 user 0m3.855s 00:13:48.231 sys 0m3.844s 00:13:48.231 08:52:10 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # xtrace_disable 00:13:48.231 08:52:10 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:48.231 ************************************ 00:13:48.231 END TEST nvmf_fused_ordering 00:13:48.231 ************************************ 00:13:48.231 08:52:10 nvmf_rdma -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:13:48.231 08:52:10 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:13:48.231 08:52:10 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:13:48.231 08:52:10 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:48.231 ************************************ 00:13:48.231 START TEST nvmf_delete_subsystem 00:13:48.231 ************************************ 00:13:48.231 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:13:48.231 * Looking for test storage... 
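The killprocess sequence traced above follows a standard kill-and-reap shape: check that a PID was recorded, confirm the process is still alive, refuse to signal anything it should not touch, send the signal, then reap. A minimal bash sketch of that pattern (simplified from the xtrace; the real autotest_common.sh helper has more branches, so treat this body as an illustration rather than the verbatim function):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                 # a PID must have been recorded
        kill -0 "$pid" 2>/dev/null || return 1    # the process must still exist
        # Mirror the "'[' reactor_1 = sudo ']'" guard above: never signal sudo itself
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true           # reap the child, ignore its exit status
    }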
00:13:48.231 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:13:48.231 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:13:48.231 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:13:48.231 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.231 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.231 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.231 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.231 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.231 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.231 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.231 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.231 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.231 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.231 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:13:48.231 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:13:48.231 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.231 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.231 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:48.231 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:48.231 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:13:48.231 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.231 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.231 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.231 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.232 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.232 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.232 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:13:48.232 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.232 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:13:48.232 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:48.232 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:48.232 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:48.232 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.232 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.232 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:48.232 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:48.232 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:48.232 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:48.232 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:48.232 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.232 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:48.232 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:48.232 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:48.232 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.232 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:48.232 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.232 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:48.232 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:48.232 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:13:48.232 08:52:10 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:53.502 08:52:15 
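The gather_supported_nvmf_pci_devs step that begins here classifies candidate NICs purely by PCI vendor:device ID before any driver work happens. Reduced to its core, the pattern looks like the sketch below (pci_bus_cache is assumed to be an associative array mapping "vendor:device" keys to PCI addresses, filled in earlier by the harness):

    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=() pci_devs=()
    e810+=(${pci_bus_cache["$intel:0x1592"]})    # Intel E810 variant
    e810+=(${pci_bus_cache["$intel:0x159b"]})    # the 0x159b parts found below
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    mlx+=(${pci_bus_cache["$mellanox:0x1017"]})  # one of several Mellanox IDs collected
    pci_devs=("${e810[@]}")                      # SPDK_TEST_NVMF_NICS=e810 narrows the set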
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:53.502 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:53.502 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # modinfo irdma 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- 
nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:53.502 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:53.503 Found net devices under 0000:af:00.0: cvl_0_0 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:53.503 Found net devices under 0000:af:00.1: cvl_0_1 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # rdma_device_init 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # uname 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:53.503 08:52:15 
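Each surviving PCI function is then mapped to its kernel net device through sysfs, which is what produced the "Found net devices under 0000:af:00.0: cvl_0_0" lines above. The same lookup written standalone (a sketch of the loop the xtrace shows at common.sh@382-401):

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done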
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo cvl_0_0 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo cvl_0_1 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:13:53.503 8: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:13:53.503 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:13:53.503 altname enp175s0f0np0 00:13:53.503 altname ens801f0np0 00:13:53.503 inet 192.168.100.8/24 scope global cvl_0_0 00:13:53.503 valid_lft forever preferred_lft forever 00:13:53.503 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:13:53.503 valid_lft forever preferred_lft forever 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:53.503 08:52:15 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:13:53.503 9: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:13:53.503 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:13:53.503 altname enp175s0f1np1 00:13:53.503 altname ens801f1np1 00:13:53.503 inet 192.168.100.9/24 scope global cvl_0_1 00:13:53.503 valid_lft forever preferred_lft forever 00:13:53.503 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:13:53.503 valid_lft forever preferred_lft forever 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo cvl_0_0 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo cvl_0_1 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:13:53.503 08:52:15 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:53.503 192.168.100.9' 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:53.503 192.168.100.9' 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # head -n 1 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:53.503 192.168.100.9' 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # tail -n +2 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # head -n 1 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@723 -- # xtrace_disable 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1276909 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1276909 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@830 -- # '[' -z 1276909 ']' 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.503 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- 
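The address plumbing traced above reduces to one short pipeline per interface, followed by peeling the first and second entries off the collected list. A standalone sketch (the interface names are hardcoded here for illustration; the harness derives them from get_rdma_if_list):

    get_ip_address() {
        # "192.168.100.8/24" in field 4 of `ip -o -4 addr show` becomes "192.168.100.8"
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    RDMA_IP_LIST=$(for dev in cvl_0_0 cvl_0_1; do get_ip_address "$dev"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9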
common/autotest_common.sh@835 -- # local max_retries=100 00:13:53.504 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.504 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # xtrace_disable 00:13:53.504 08:52:15 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:53.504 [2024-06-09 08:52:15.740765] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:13:53.504 [2024-06-09 08:52:15.740812] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.504 EAL: No free 2048 kB hugepages reported on node 1 00:13:53.504 [2024-06-09 08:52:15.794230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:53.504 [2024-06-09 08:52:15.864002] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:53.504 [2024-06-09 08:52:15.864040] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:53.504 [2024-06-09 08:52:15.864047] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:53.504 [2024-06-09 08:52:15.864053] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:53.504 [2024-06-09 08:52:15.864057] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:53.504 [2024-06-09 08:52:15.864163] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:13:53.504 [2024-06-09 08:52:15.864166] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.081 08:52:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:13:54.081 08:52:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@863 -- # return 0 00:13:54.081 08:52:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:54.081 08:52:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@729 -- # xtrace_disable 00:13:54.081 08:52:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:54.081 08:52:16 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:54.081 08:52:16 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:54.081 08:52:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:54.081 08:52:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:54.081 [2024-06-09 08:52:16.579713] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x5482d0/0x547910) succeed. 00:13:54.081 [2024-06-09 08:52:16.588336] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x549580/0x547e90) succeed. 00:13:54.081 [2024-06-09 08:52:16.588357] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
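The rpc_cmd calls that follow drive the freshly started nvmf_tgt over its /var/tmp/spdk.sock RPC socket. Stated as a plain scripts/rpc.py sequence, with flags copied from the trace below (a sketch: rpc_cmd is the harness wrapper around the same RPCs, and $rootdir standing for the spdk checkout is an assumption here):

    RPC="$rootdir/scripts/rpc.py"
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $RPC bdev_null_create NULL1 1000 512     # 1000 MB null bdev with 512-byte blocks
    $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The delay bdev in front of NULL1 is what keeps I/O in flight long enough for the later nvmf_delete_subsystem call to race against an active workload.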
New I/O unit size 24576 00:13:54.081 08:52:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:54.081 08:52:16 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:54.081 08:52:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:54.081 08:52:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:54.081 08:52:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:54.081 08:52:16 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:54.081 08:52:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:54.081 08:52:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:54.081 [2024-06-09 08:52:16.604546] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:54.081 08:52:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:54.081 08:52:16 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:54.081 08:52:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:54.081 08:52:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:54.081 NULL1 00:13:54.081 08:52:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:54.081 08:52:16 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:54.081 08:52:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:54.081 08:52:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:54.081 Delay0 00:13:54.081 08:52:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:54.081 08:52:16 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.081 08:52:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:54.081 08:52:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:54.409 08:52:16 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:54.409 08:52:16 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1277145 00:13:54.409 08:52:16 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:54.409 08:52:16 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:54.409 EAL: No free 2048 kB hugepages reported on node 1 00:13:54.409 [2024-06-09 08:52:16.688806] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 00:13:56.311 08:52:18 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:56.311 08:52:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:56.311 08:52:18 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:56.879 [2024-06-09 08:52:19.188769] nvme_rdma.c:2494:nvme_rdma_log_wc_status: *ERROR*: WC error, qid 2, qp state 1, request 0x35184374496496 type 1, status: (12): transport retry counter exceeded 00:13:56.879 NVMe io qpair process completion error 00:13:56.879 NVMe io qpair process completion error 00:13:56.879 [... repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines elided ...] 00:13:57.447 [2024-06-09 08:52:19.760743] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) [... repeated "Read/Write completed with error (sct=0, sc=8)" lines elided ...] [2024-06-09 08:52:19.761160] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) [... repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines elided ...] [2024-06-09 08:52:19.761741] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) [... repeated "Read/Write completed with error (sct=0, sc=8)" lines elided ...] 00:13:57.448 08:52:19 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 08:52:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 08:52:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1277145 08:52:19 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:13:58.016 08:52:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 08:52:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1277145 08:52:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:13:58.274 NVMe io qpair process completion error NVMe io qpair process completion error 00:13:58.274 08:52:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 08:52:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1277145 08:52:20 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:13:58.843 [2024-06-09 08:52:21.296741] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) [... repeated "Read/Write completed with error (sct=0, sc=8)" lines elided ...] [2024-06-09 08:52:21.297108] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) [... repeated "Read completed with error (sct=0, sc=8)" lines elided ...]
00:13:58.843 Read completed with error (sct=0, sc=8) 00:13:58.843 Write completed with error (sct=0, sc=8) 00:13:58.843 Read completed with error (sct=0, sc=8) 00:13:58.843 Read completed with error (sct=0, sc=8) 00:13:58.843 [2024-06-09 08:52:21.297517] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:13:58.843 Write completed with error (sct=0, sc=8) 00:13:58.843 Write completed with error (sct=0, sc=8) 00:13:58.843 Read completed with error (sct=0, sc=8) 00:13:58.843 Read completed with error (sct=0, sc=8) 00:13:58.843 Write completed with error (sct=0, sc=8) 00:13:58.843 Write completed with error (sct=0, sc=8) 00:13:58.843 Write completed with error (sct=0, sc=8) 00:13:58.843 Write completed with error (sct=0, sc=8) 00:13:58.843 Read completed with error (sct=0, sc=8) 00:13:58.843 Write completed with error (sct=0, sc=8) 00:13:58.843 Read completed with error (sct=0, sc=8) 00:13:58.843 Write completed with error (sct=0, sc=8) 00:13:58.843 Read completed with error (sct=0, sc=8) 00:13:58.843 Read completed with error (sct=0, sc=8) 00:13:58.843 Write completed with error (sct=0, sc=8) 00:13:58.843 Read completed with error (sct=0, sc=8) 00:13:58.843 Write completed with error (sct=0, sc=8) 00:13:58.843 Read completed with error (sct=0, sc=8) 00:13:58.843 Read completed with error (sct=0, sc=8) 00:13:58.843 Write completed with error (sct=0, sc=8) 00:13:58.843 Read completed with error (sct=0, sc=8) 00:13:58.843 Read completed with error (sct=0, sc=8) 00:13:58.843 Read completed with error (sct=0, sc=8) 00:13:58.843 Read completed with error (sct=0, sc=8) 00:13:58.843 Read completed with error (sct=0, sc=8) 00:13:58.843 Write completed with error (sct=0, sc=8) 00:13:58.843 Read completed with error (sct=0, sc=8) 00:13:58.843 Write completed with error (sct=0, sc=8) 00:13:58.843 [2024-06-09 08:52:21.297856] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:13:58.843 Read completed with error (sct=0, sc=8) 00:13:58.843 Write completed with error (sct=0, sc=8) 00:13:58.843 Read completed with error (sct=0, sc=8) 00:13:58.843 Read completed with error (sct=0, sc=8) 00:13:58.843 Write completed with error (sct=0, sc=8) 00:13:58.843 Read completed with error (sct=0, sc=8) 00:13:58.843 Read completed with error (sct=0, sc=8) 00:13:58.843 Write completed with error (sct=0, sc=8) 00:13:58.843 Read completed with error (sct=0, sc=8) 00:13:58.843 Read completed with error (sct=0, sc=8) 00:13:58.843 Read completed with error (sct=0, sc=8) 00:13:58.843 Write completed with error (sct=0, sc=8) 00:13:58.843 Read completed with error (sct=0, sc=8) 00:13:58.843 Read completed with error (sct=0, sc=8) 00:13:58.843 Write completed with error (sct=0, sc=8) 00:13:58.843 Read completed with error (sct=0, sc=8) 00:13:58.843 Write completed with error (sct=0, sc=8) 00:13:58.843 Write completed with error (sct=0, sc=8) 00:13:58.843 Write completed with error (sct=0, sc=8) 00:13:58.843 Read completed with error (sct=0, sc=8) 00:13:58.843 Read completed with error (sct=0, sc=8) 00:13:58.843 Write completed with error (sct=0, sc=8) 00:13:58.843 Write completed with error (sct=0, sc=8) 00:13:58.843 Read completed with error (sct=0, sc=8) 00:13:58.843 Read completed with error (sct=0, sc=8) 00:13:58.843 Read completed with error (sct=0, sc=8) 00:13:58.843 Read completed 
with error (sct=0, sc=8) 00:13:58.843 Write completed with error (sct=0, sc=8) 00:13:58.843 Initializing NVMe Controllers 00:13:58.843 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:13:58.843 Controller IO queue size 128, less than required. 00:13:58.843 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:58.843 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:58.843 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:58.843 Initialization complete. Launching workers. 00:13:58.843 ======================================================== 00:13:58.843 Latency(us) 00:13:58.843 Device Information : IOPS MiB/s Average min max 00:13:58.843 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 141.90 0.07 1322645.94 418024.77 2521202.73 00:13:58.843 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 141.90 0.07 1362400.74 982729.87 2518743.20 00:13:58.843 ======================================================== 00:13:58.843 Total : 283.81 0.14 1342523.34 418024.77 2521202.73 00:13:58.843 00:13:58.843 [2024-06-09 08:52:21.298731] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:13:58.843 08:52:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:58.843 08:52:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1277145 00:13:58.844 08:52:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:13:58.844 [2024-06-09 08:52:21.311387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:13:58.844 [2024-06-09 08:52:21.311404] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
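The xtrace above captures delete_subsystem.sh polling for the perf process to go away (script lines 34-38: delay=0, kill -0, sleep 0.5, a delay cap). A minimal sketch of that bounded-wait idiom, reconstructed from the traced commands; the function name wait_for_exit and the failure handling are assumptions, only the individual commands appear in the log:

    # Bounded wait for a PID to disappear, as traced from delete_subsystem.sh.
    wait_for_exit() {                        # placeholder name, not from the log
        local pid=$1 delay=0
        while kill -0 "$pid"; do             # prints 'kill: (pid) - No such process' once it exits
            (( delay++ > 30 )) && return 1   # cap traced as 30 here, 20 in the later loop
            sleep 0.5
        done
    }

The 'kill: (1277145) - No such process' line further down is exactly this probe firing after the perf process has already died.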
00:13:58.844 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:13:59.411 08:52:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:59.411 08:52:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1277145 00:13:59.411 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1277145) - No such process 00:13:59.411 08:52:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1277145 00:13:59.411 08:52:21 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@649 -- # local es=0 00:13:59.411 08:52:21 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # valid_exec_arg wait 1277145 00:13:59.411 08:52:21 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@637 -- # local arg=wait 00:13:59.411 08:52:21 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:59.411 08:52:21 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # type -t wait 00:13:59.411 08:52:21 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:59.411 08:52:21 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # wait 1277145 00:13:59.411 08:52:21 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # es=1 00:13:59.411 08:52:21 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:59.411 08:52:21 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:59.411 08:52:21 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:59.411 08:52:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:59.411 08:52:21 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:59.411 08:52:21 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:59.411 08:52:21 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:59.411 08:52:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:59.411 08:52:21 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:59.411 08:52:21 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:59.411 [2024-06-09 08:52:21.824628] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:59.411 08:52:21 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:59.411 08:52:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.411 08:52:21 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:59.411 08:52:21 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:59.411 08:52:21 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:59.411 08:52:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1278056 00:13:59.411 08:52:21 nvmf_rdma.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@56 -- # delay=0 00:13:59.411 08:52:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:59.411 08:52:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1278056 00:13:59.411 08:52:21 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:59.411 EAL: No free 2048 kB hugepages reported on node 1 00:13:59.411 [2024-06-09 08:52:21.905740] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:59.980 08:52:22 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:59.980 08:52:22 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1278056 00:13:59.980 08:52:22 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:00.547 08:52:22 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:00.547 08:52:22 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1278056 00:14:00.547 08:52:22 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:00.805 08:52:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:00.805 08:52:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1278056 00:14:00.805 08:52:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:01.372 08:52:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:01.372 08:52:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1278056 00:14:01.372 08:52:23 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:01.938 08:52:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:01.938 08:52:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1278056 00:14:01.938 08:52:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:02.506 08:52:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:02.506 08:52:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1278056 00:14:02.506 08:52:24 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:03.073 08:52:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:03.073 08:52:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1278056 00:14:03.073 08:52:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:03.332 08:52:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:03.332 08:52:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1278056 00:14:03.332 08:52:25 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:03.899 08:52:26 nvmf_rdma.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:03.899 08:52:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1278056 00:14:03.899 08:52:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:04.467 08:52:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:04.467 08:52:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1278056 00:14:04.467 08:52:26 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:05.034 08:52:27 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:05.034 08:52:27 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1278056 00:14:05.034 08:52:27 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:05.602 08:52:27 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:05.602 08:52:27 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1278056 00:14:05.602 08:52:27 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:05.860 08:52:28 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:05.860 08:52:28 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1278056 00:14:05.860 08:52:28 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:06.427 08:52:28 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:06.427 08:52:28 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1278056 00:14:06.427 08:52:28 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:06.685 Initializing NVMe Controllers 00:14:06.685 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:14:06.685 Controller IO queue size 128, less than required. 00:14:06.685 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:06.685 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:06.685 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:06.685 Initialization complete. Launching workers. 
00:14:06.685 ======================================================== 00:14:06.685 Latency(us) 00:14:06.685 Device Information : IOPS MiB/s Average min max 00:14:06.685 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001492.91 1000064.20 1004200.98 00:14:06.685 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002783.29 1000067.59 1006367.97 00:14:06.685 ======================================================== 00:14:06.685 Total : 256.00 0.12 1002138.10 1000064.20 1006367.97 00:14:06.685 00:14:06.942 08:52:29 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:06.942 08:52:29 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1278056 00:14:06.942 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1278056) - No such process 00:14:06.942 08:52:29 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1278056 00:14:06.942 08:52:29 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:06.942 08:52:29 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:06.942 08:52:29 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:06.942 08:52:29 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:14:06.942 08:52:29 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:06.942 08:52:29 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:06.942 08:52:29 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:14:06.942 08:52:29 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:06.942 08:52:29 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:06.942 rmmod nvme_rdma 00:14:06.942 rmmod nvme_fabrics 00:14:06.942 08:52:29 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:06.942 08:52:29 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:14:06.942 08:52:29 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:14:06.942 08:52:29 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1276909 ']' 00:14:06.942 08:52:29 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1276909 00:14:06.942 08:52:29 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@949 -- # '[' -z 1276909 ']' 00:14:06.942 08:52:29 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # kill -0 1276909 00:14:06.942 08:52:29 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # uname 00:14:06.942 08:52:29 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:06.942 08:52:29 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1276909 00:14:07.199 08:52:29 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:07.199 08:52:29 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:07.199 08:52:29 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1276909' 00:14:07.199 killing process with pid 1276909 00:14:07.199 08:52:29 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # 
kill 1276909 00:14:07.199 08:52:29 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # wait 1276909 00:14:07.199 08:52:29 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:07.199 08:52:29 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:07.199 00:14:07.199 real 0m19.070s 00:14:07.199 user 0m51.666s 00:14:07.199 sys 0m4.722s 00:14:07.199 08:52:29 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:07.199 08:52:29 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:07.199 ************************************ 00:14:07.199 END TEST nvmf_delete_subsystem 00:14:07.199 ************************************ 00:14:07.199 08:52:29 nvmf_rdma -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:14:07.199 08:52:29 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:07.199 08:52:29 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:07.199 08:52:29 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:07.458 ************************************ 00:14:07.458 START TEST nvmf_ns_masking 00:14:07.458 ************************************ 00:14:07.458 08:52:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:14:07.458 * Looking for test storage... 00:14:07.458 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:14:07.458 08:52:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:14:07.458 08:52:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:07.458 08:52:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.458 08:52:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.458 08:52:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.458 08:52:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.458 08:52:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.458 08:52:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.458 08:52:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.458 08:52:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.458 08:52:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.458 08:52:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.458 08:52:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:14:07.458 08:52:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:14:07.458 08:52:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.458 08:52:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.458 08:52:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:07.458 08:52:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:07.458 08:52:29 nvmf_rdma.nvmf_ns_masking -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh
00:14:07.458 08:52:29 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:07.458 08:52:29 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:07.458 08:52:29 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:07.459 08:52:29 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@2-@6 [... repeated PATH traces omitted: export.sh prepends the golangci, protoc and go toolchain bin directories to PATH, exports it, and echoes the result ...]
00:14:07.459 08:52:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0
00:14:07.459 08:52:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:14:07.459 08:52:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:14:07.459 08:52:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:14:07.459 08:52:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:14:07.459 08:52:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:14:07.459 08:52:29
nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:07.459 08:52:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:07.459 08:52:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:07.459 08:52:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:14:07.459 08:52:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:14:07.459 08:52:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:07.459 08:52:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:14:07.459 08:52:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:14:07.459 08:52:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=cbd6680e-ced6-44b0-94eb-5cbfeb8c79c0 00:14:07.459 08:52:29 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:14:07.459 08:52:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:07.459 08:52:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:07.459 08:52:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:07.459 08:52:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:07.459 08:52:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:07.459 08:52:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.459 08:52:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.459 08:52:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.459 08:52:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:07.459 08:52:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:07.459 08:52:29 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:07.459 08:52:29 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- 
nvmf/common.sh@298 -- # local -ga mlx 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:12.725 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:12.725 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
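The scan above is nvmf/common.sh classifying NICs by PCI device ID (the 0x159b matched here is the Intel E810 'ice' part) before resolving each PCI address to a kernel net device through sysfs, which the next entries show. A minimal sketch of that sysfs lookup, assuming pci_devs has already been populated as in the trace:

    # Map PCI addresses to net-device names (sketch of the logic traced at
    # nvmf/common.sh@383-@401; 'pci_devs' is assumed already filled).
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip path, keep ifname
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

This is what produces the 'Found net devices under 0000:af:00.0: cvl_0_0' lines below.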
00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@377 -- # modinfo irdma 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:12.725 Found net devices under 0000:af:00.0: cvl_0_0 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:12.725 Found net devices under 0000:af:00.1: cvl_0_1 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@420 -- # rdma_device_init 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@58 -- # uname 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:12.725 08:52:34 
nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo cvl_0_0 00:14:12.725 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo cvl_0_1 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:14:12.726 8: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:14:12.726 link/ether 
b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:14:12.726 altname enp175s0f0np0 00:14:12.726 altname ens801f0np0 00:14:12.726 inet 192.168.100.8/24 scope global cvl_0_0 00:14:12.726 valid_lft forever preferred_lft forever 00:14:12.726 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:14:12.726 valid_lft forever preferred_lft forever 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:14:12.726 9: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:14:12.726 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:14:12.726 altname enp175s0f1np1 00:14:12.726 altname ens801f1np1 00:14:12.726 inet 192.168.100.9/24 scope global cvl_0_1 00:14:12.726 valid_lft forever preferred_lft forever 00:14:12.726 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:14:12.726 valid_lft forever preferred_lft forever 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo cvl_0_0 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- 
nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo cvl_0_1 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:12.726 192.168.100.9' 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:12.726 192.168.100.9' 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # head -n 1 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:12.726 192.168.100.9' 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # tail -n +2 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # head -n 1 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1282234 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@482 -- # 
waitforlisten 1282234 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@830 -- # '[' -z 1282234 ']' 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:12.726 08:52:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:12.726 [2024-06-09 08:52:34.754262] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:14:12.726 [2024-06-09 08:52:34.754306] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.726 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.726 [2024-06-09 08:52:34.809556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:12.726 [2024-06-09 08:52:34.888501] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:12.726 [2024-06-09 08:52:34.888535] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:12.726 [2024-06-09 08:52:34.888542] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:12.726 [2024-06-09 08:52:34.888548] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:12.726 [2024-06-09 08:52:34.888553] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
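waitforlisten above blocks until the freshly started nvmf_tgt answers on /var/tmp/spdk.sock; once the reactors are up, ns_masking.sh provisions the target over JSON-RPC. Collected from the rpc.py calls traced below, with the long workspace prefix shortened to 'rpc.py' (a transcription of the traced arguments, not additional steps):

    # RPC provisioning sequence as traced below ('rpc.py' stands for
    # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py).
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1     # 64 MB bdev, 512 B blocks
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

Later in the run the namespace is removed and re-added with --no-auto-visible, which enables the per-host masking the test then exercises.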
00:14:12.726 [2024-06-09 08:52:34.888615] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.726 [2024-06-09 08:52:34.888709] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:14:12.726 [2024-06-09 08:52:34.888796] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:14:12.726 [2024-06-09 08:52:34.888797] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.294 08:52:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:13.294 08:52:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@863 -- # return 0 00:14:13.294 08:52:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:13.294 08:52:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:13.294 08:52:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:13.294 08:52:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.294 08:52:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:13.294 [2024-06-09 08:52:35.764568] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x11168f0/0x1115f30) succeed. 00:14:13.294 [2024-06-09 08:52:35.773371] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1117ca0/0x11164b0) succeed. 00:14:13.294 [2024-06-09 08:52:35.773391] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:14:13.294 08:52:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:14:13.294 08:52:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:14:13.294 08:52:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:13.552 Malloc1 00:14:13.552 08:52:35 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:13.811 Malloc2 00:14:13.811 08:52:36 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:13.811 08:52:36 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:14.070 08:52:36 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:14.329 [2024-06-09 08:52:36.673365] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:14.329 08:52:36 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:14:14.329 08:52:36 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cbd6680e-ced6-44b0-94eb-5cbfeb8c79c0 -a 192.168.100.8 -s 4420 -i 4 00:14:14.329 08:52:36 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@20 -- # 
waitforserial SPDKISFASTANDAWESOME 00:14:14.329 08:52:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:14:14.329 08:52:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:14:14.329 08:52:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:14:14.329 08:52:36 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:14:16.863 08:52:38 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:14:16.863 08:52:38 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:16.863 08:52:38 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:14:16.863 08:52:38 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:14:16.863 08:52:38 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:14:16.863 08:52:38 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:14:16.863 08:52:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:16.863 08:52:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:16.863 08:52:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:16.863 08:52:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:16.863 08:52:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:14:16.863 08:52:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:16.863 08:52:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:16.863 [ 0]:0x1 00:14:16.863 08:52:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:16.864 08:52:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:16.864 08:52:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=069a38331ce94dd0a98551de69c121db 00:14:16.864 08:52:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 069a38331ce94dd0a98551de69c121db != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:16.864 08:52:38 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:16.864 08:52:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:14:16.864 08:52:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:16.864 08:52:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:16.864 [ 0]:0x1 00:14:16.864 08:52:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:16.864 08:52:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:16.864 08:52:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=069a38331ce94dd0a98551de69c121db 00:14:16.864 08:52:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 069a38331ce94dd0a98551de69c121db != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:16.864 08:52:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:14:16.864 
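The masking check that drives this test turns on the NGUID: a namespace hidden from the connecting host identifies with an all-zero NGUID, while a visible one reports its real GUID (069a38331ce94dd0a98551de69c121db for namespace 1 here). A sketch of the ns_is_visible pattern reconstructed from the commands traced above; illustrative, not the exact ns_masking.sh source:

    # List the NSID on the controller, then compare the NGUID reported
    # by 'nvme id-ns' against all zeros; zeros mean the namespace is
    # masked for this host. Succeeds only when the namespace is visible.
    ns_is_visible() {
        local dev=$1 nsid=$2 nguid
        nvme list-ns "$dev" | grep "$nsid"
        nguid=$(nvme id-ns "$dev" -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

    # Example: ns_is_visible /dev/nvme0 0x1 && echo 'namespace 1 visible'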
08:52:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:16.864 08:52:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:16.864 [ 1]:0x2 00:14:16.864 08:52:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:16.864 08:52:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:16.864 08:52:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5f63b408e13a424388c354ab164a6ad4 00:14:16.864 08:52:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5f63b408e13a424388c354ab164a6ad4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:16.864 08:52:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:14:16.864 08:52:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:17.122 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.123 08:52:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:17.381 08:52:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:17.639 08:52:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:14:17.639 08:52:39 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cbd6680e-ced6-44b0-94eb-5cbfeb8c79c0 -a 192.168.100.8 -s 4420 -i 4 00:14:17.639 08:52:40 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:17.639 08:52:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:14:17.639 08:52:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:14:17.639 08:52:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 1 ]] 00:14:17.639 08:52:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=1 00:14:17.639 08:52:40 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:14:19.542 08:52:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:14:19.542 08:52:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:19.542 08:52:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:14:19.542 08:52:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:14:19.542 08:52:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:14:19.542 08:52:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:14:19.542 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:19.542 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:19.801 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:19.801 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:19.801 08:52:42 
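After each nvme connect, the harness polls until the expected number of block devices carrying the subsystem serial appears, as the waitforserial trace above shows. A simplified rendering of that loop; the real helper lives in autotest_common.sh, and this version assumes only lsblk and the serial string:

    # Retry up to 16 times, sleeping 2 s between attempts, until lsblk
    # reports the expected count of devices with the given serial.
    waitforserial() {
        local serial=$1 expected=${2:-1} i=0 found
        while (( i++ <= 15 )); do
            sleep 2
            found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial" || true)
            (( found == expected )) && return 0
        done
        echo "devices with serial $serial never appeared" >&2
        return 1
    }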
nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:14:19.801 08:52:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:14:19.801 08:52:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:14:19.801 08:52:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:14:19.801 08:52:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:19.801 08:52:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:14:19.801 08:52:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:19.801 08:52:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:14:19.801 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:19.801 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:19.801 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:19.801 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:19.801 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:19.801 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:19.801 08:52:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:14:19.801 08:52:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:19.801 08:52:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:19.801 08:52:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:19.801 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:14:19.801 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:19.801 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:19.801 [ 0]:0x2 00:14:19.801 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:19.801 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:19.801 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5f63b408e13a424388c354ab164a6ad4 00:14:19.801 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5f63b408e13a424388c354ab164a6ad4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:19.801 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:20.060 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:14:20.060 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:20.060 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:20.060 [ 0]:0x1 00:14:20.060 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:20.060 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:20.060 08:52:42 
nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=069a38331ce94dd0a98551de69c121db 00:14:20.060 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 069a38331ce94dd0a98551de69c121db != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:20.060 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:14:20.060 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:20.060 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:20.060 [ 1]:0x2 00:14:20.060 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:20.060 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:20.060 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5f63b408e13a424388c354ab164a6ad4 00:14:20.060 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5f63b408e13a424388c354ab164a6ad4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:20.060 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:20.319 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:14:20.319 08:52:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:14:20.319 08:52:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:14:20.319 08:52:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:14:20.319 08:52:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:20.319 08:52:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:14:20.319 08:52:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:20.319 08:52:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:14:20.319 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:20.319 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:20.319 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:20.319 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:20.319 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:20.319 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:20.319 08:52:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:14:20.319 08:52:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:20.319 08:52:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:20.319 08:52:42 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:20.319 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:14:20.319 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:20.319 08:52:42 nvmf_rdma.nvmf_ns_masking -- 
target/ns_masking.sh@39 -- # grep 0x2 00:14:20.319 [ 0]:0x2 00:14:20.319 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:20.319 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:20.319 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5f63b408e13a424388c354ab164a6ad4 00:14:20.319 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5f63b408e13a424388c354ab164a6ad4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:20.319 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:14:20.319 08:52:42 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:20.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.578 08:52:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:20.835 08:52:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:14:20.835 08:52:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cbd6680e-ced6-44b0-94eb-5cbfeb8c79c0 -a 192.168.100.8 -s 4420 -i 4 00:14:20.835 08:52:43 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:20.835 08:52:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local i=0 00:14:20.835 08:52:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:14:20.835 08:52:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # [[ -n 2 ]] 00:14:20.835 08:52:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # nvme_device_counter=2 00:14:20.835 08:52:43 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # sleep 2 00:14:23.366 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:14:23.366 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # return 0 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:23.367 [ 0]:0x1 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 
-- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=069a38331ce94dd0a98551de69c121db 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 069a38331ce94dd0a98551de69c121db != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:23.367 [ 1]:0x2 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5f63b408e13a424388c354ab164a6ad4 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5f63b408e13a424388c354ab164a6ad4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 
0x2 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:23.367 [ 0]:0x2 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5f63b408e13a424388c354ab164a6ad4 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5f63b408e13a424388c354ab164a6ad4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py ]] 00:14:23.367 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:23.626 [2024-06-09 08:52:45.956700] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:23.626 request: 00:14:23.626 { 00:14:23.626 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:23.626 "nsid": 2, 00:14:23.626 "host": "nqn.2016-06.io.spdk:host1", 00:14:23.626 "method": "nvmf_ns_remove_host", 00:14:23.626 "req_id": 1 00:14:23.626 } 00:14:23.626 Got JSON-RPC error response 00:14:23.626 response: 00:14:23.626 { 00:14:23.626 "code": -32602, 00:14:23.626 "message": "Invalid parameters" 00:14:23.626 } 00:14:23.626 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:14:23.626 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:23.626 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:23.626 08:52:45 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:23.626 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:14:23.626 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:14:23.626 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:14:23.626 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:14:23.626 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:23.626 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:14:23.626 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:14:23.626 08:52:45 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:14:23.626 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:23.626 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:14:23.626 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:23.626 08:52:45 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:23.626 08:52:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:23.626 08:52:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:23.626 08:52:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:14:23.626 08:52:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:14:23.626 08:52:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:14:23.626 08:52:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:14:23.626 08:52:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:14:23.626 08:52:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:14:23.626 08:52:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:23.626 [ 0]:0x2 00:14:23.626 08:52:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:23.626 08:52:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:23.626 08:52:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=5f63b408e13a424388c354ab164a6ad4 00:14:23.626 08:52:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 5f63b408e13a424388c354ab164a6ad4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:23.626 08:52:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:14:23.626 08:52:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:23.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.884 08:52:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:24.142 08:52:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:24.142 08:52:46 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@114 -- # 
nvmftestfini 00:14:24.142 08:52:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:24.142 08:52:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:24.142 08:52:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:24.142 08:52:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:24.142 08:52:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:24.142 08:52:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:24.142 08:52:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:24.142 rmmod nvme_rdma 00:14:24.142 rmmod nvme_fabrics 00:14:24.142 08:52:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:24.142 08:52:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:24.142 08:52:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:24.142 08:52:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1282234 ']' 00:14:24.142 08:52:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1282234 00:14:24.142 08:52:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@949 -- # '[' -z 1282234 ']' 00:14:24.142 08:52:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@953 -- # kill -0 1282234 00:14:24.142 08:52:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@954 -- # uname 00:14:24.142 08:52:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:14:24.142 08:52:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1282234 00:14:24.143 08:52:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:14:24.143 08:52:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:14:24.143 08:52:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1282234' 00:14:24.143 killing process with pid 1282234 00:14:24.143 08:52:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@968 -- # kill 1282234 00:14:24.143 08:52:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@973 -- # wait 1282234 00:14:24.401 08:52:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:24.401 08:52:46 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:24.401 00:14:24.401 real 0m17.150s 00:14:24.401 user 0m52.338s 00:14:24.401 sys 0m4.876s 00:14:24.401 08:52:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:24.401 08:52:46 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:24.401 ************************************ 00:14:24.401 END TEST nvmf_ns_masking 00:14:24.401 ************************************ 00:14:24.660 08:52:46 nvmf_rdma -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:14:24.660 08:52:46 nvmf_rdma -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:14:24.660 08:52:46 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:24.660 08:52:46 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:24.660 08:52:46 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:24.660 ************************************ 00:14:24.660 START TEST nvmf_nvme_cli 00:14:24.660 ************************************ 00:14:24.660 08:52:47 
nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:14:24.660 * Looking for test storage... 00:14:24.660 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:24.660 08:52:47 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@322 -- # 
pci_devs+=("${x722[@]}") 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:30.011 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:30.011 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@377 -- # modinfo irdma 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:30.011 Found net devices under 
0000:af:00.0: cvl_0_0 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:30.011 Found net devices under 0000:af:00.1: cvl_0_1 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@420 -- # rdma_device_init 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@58 -- # uname 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- 
nvmf/common.sh@104 -- # echo cvl_0_0 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo cvl_0_1 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:30.011 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:14:30.011 8: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:14:30.011 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:14:30.011 altname enp175s0f0np0 00:14:30.012 altname ens801f0np0 00:14:30.012 inet 192.168.100.8/24 scope global cvl_0_0 00:14:30.012 valid_lft forever preferred_lft forever 00:14:30.012 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:14:30.012 valid_lft forever preferred_lft forever 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:14:30.012 9: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:14:30.012 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:14:30.012 altname enp175s0f1np1 00:14:30.012 altname ens801f1np1 00:14:30.012 inet 192.168.100.9/24 scope global cvl_0_1 00:14:30.012 valid_lft forever preferred_lft forever 00:14:30.012 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:14:30.012 valid_lft forever preferred_lft forever 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- 
nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo cvl_0_0 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo cvl_0_1 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:30.012 192.168.100.9' 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:30.012 192.168.100.9' 
00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # head -n 1 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:30.012 192.168.100.9' 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # tail -n +2 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # head -n 1 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1287472 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1287472 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@830 -- # '[' -z 1287472 ']' 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:30.012 08:52:52 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:30.012 [2024-06-09 08:52:52.532627] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:14:30.012 [2024-06-09 08:52:52.532670] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.012 EAL: No free 2048 kB hugepages reported on node 1 00:14:30.271 [2024-06-09 08:52:52.587700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:30.271 [2024-06-09 08:52:52.659841] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.271 [2024-06-09 08:52:52.659880] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
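nvmftestinit derives the target addresses by walking the RDMA-capable interfaces and pulling each IPv4 address out of ip(8) output with the awk/cut pipeline traced above; the first entry of RDMA_IP_LIST becomes NVMF_FIRST_TARGET_IP (192.168.100.8) and the second becomes NVMF_SECOND_TARGET_IP (192.168.100.9). A compact reconstruction of that extraction:

    # 'ip -o -4 addr show IFACE' prints one line per address with the
    # CIDR form in field 4; cut strips the prefix length.
    get_ip_address() {
        local ifc=$1
        ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    }

    first_ip=$(get_ip_address cvl_0_0)    # 192.168.100.8 on this rig
    second_ip=$(get_ip_address cvl_0_1)   # 192.168.100.9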
00:14:30.271 [2024-06-09 08:52:52.659886] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:30.271 [2024-06-09 08:52:52.659892] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:30.271 [2024-06-09 08:52:52.659896] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:30.271 [2024-06-09 08:52:52.659962] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.271 [2024-06-09 08:52:52.660056] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:14:30.271 [2024-06-09 08:52:52.660126] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:14:30.271 [2024-06-09 08:52:52.660127] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.838 08:52:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:30.838 08:52:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@863 -- # return 0 00:14:30.838 08:52:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:30.838 08:52:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:30.838 08:52:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:30.838 08:52:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.838 08:52:53 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:30.838 08:52:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:30.838 08:52:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:30.838 [2024-06-09 08:52:53.377787] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x1c378f0/0x1c36f30) succeed. 00:14:30.838 [2024-06-09 08:52:53.386818] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1c38ca0/0x1c374b0) succeed. 00:14:30.838 [2024-06-09 08:52:53.386841] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:14:30.838 08:52:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:30.838 08:52:53 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:30.838 08:52:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:30.838 08:52:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:31.097 Malloc0 00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:31.097 Malloc1 00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:31.097 [2024-06-09 08:52:53.467744] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 
--hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420
00:14:31.097
00:14:31.097 Discovery Log Number of Records 2, Generation counter 2
00:14:31.097 =====Discovery Log Entry 0======
00:14:31.097 trtype: rdma
00:14:31.097 adrfam: ipv4
00:14:31.097 subtype: current discovery subsystem
00:14:31.097 treq: not required
00:14:31.097 portid: 0
00:14:31.097 trsvcid: 4420
00:14:31.097 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:14:31.097 traddr: 192.168.100.8
00:14:31.097 eflags: explicit discovery connections, duplicate discovery information
00:14:31.097 rdma_prtype: not specified
00:14:31.097 rdma_qptype: connected
00:14:31.097 rdma_cms: rdma-cm
00:14:31.097 rdma_pkey: 0x0000
00:14:31.097 =====Discovery Log Entry 1======
00:14:31.097 trtype: rdma
00:14:31.097 adrfam: ipv4
00:14:31.097 subtype: nvme subsystem
00:14:31.097 treq: not required
00:14:31.097 portid: 0
00:14:31.097 trsvcid: 4420
00:14:31.097 subnqn: nqn.2016-06.io.spdk:cnode1
00:14:31.097 traddr: 192.168.100.8
00:14:31.097 eflags: none
00:14:31.097 rdma_prtype: not specified
00:14:31.097 rdma_qptype: connected
00:14:31.097 rdma_cms: rdma-cm
00:14:31.097 rdma_pkey: 0x0000
00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs))
00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs
00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _
00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list
00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]]
00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]]
00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme1n2 == /dev/nvme* ]]
00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme1n2
00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme1n1 == /dev/nvme* ]]
00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme1n1
00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=2
00:14:31.097 08:52:53 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
00:14:31.356 08:52:53 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2
00:14:31.356 08:52:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # local i=0
00:14:31.356 08:52:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0
00:14:31.356 08:52:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # [[ -n 2 ]]
00:14:31.356 08:52:53 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # nvme_device_counter=2
00:14:31.356 08:52:53 nvmf_rdma.nvmf_nvme_cli -- 
common/autotest_common.sh@1204 -- # sleep 2 00:14:33.260 08:52:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # nvme_devices=2 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # return 0 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme1n2 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme1n1 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme1n2 00:14:33.518 /dev/nvme1n1 00:14:33.518 /dev/nvme0n2 00:14:33.518 /dev/nvme0n1 ]] 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # 
read -r dev _ 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme1n2 == /dev/nvme* ]] 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme1n2 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme1n1 == /dev/nvme* ]] 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme1n1 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=4 00:14:33.518 08:52:55 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:34.452 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # local i=0 00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1230 -- # return 0 00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:34.452 
rmmod nvme_rdma
00:14:34.452 rmmod nvme_fabrics
00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e
00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0
00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1287472 ']'
00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1287472
00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@949 -- # '[' -z 1287472 ']'
00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # kill -0 1287472
00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # uname
00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1287472
00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1287472'
00:14:34.452 killing process with pid 1287472
00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # kill 1287472
00:14:34.452 08:52:56 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # wait 1287472
00:14:34.710 08:52:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:14:34.710 08:52:57 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]]
00:14:34.711
00:14:34.711 real 0m10.157s
00:14:34.711 user 0m19.537s
00:14:34.711 sys 0m4.577s
00:14:34.711 08:52:57 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # xtrace_disable
00:14:34.711 08:52:57 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:14:34.711 ************************************
00:14:34.711 END TEST nvmf_nvme_cli
00:14:34.711 ************************************
00:14:34.711 08:52:57 nvmf_rdma -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]]
00:14:34.711 08:52:57 nvmf_rdma -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma
00:14:34.711 08:52:57 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:14:34.711 08:52:57 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable
00:14:34.711 08:52:57 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:14:34.711 ************************************
00:14:34.711 START TEST nvmf_host_management
00:14:34.711 ************************************
00:14:34.711 08:52:57 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma
00:14:34.969 * Looking for test storage...
00:14:34.969 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:14:34.969 08:52:57 nvmf_rdma.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:14:34.969 08:52:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:14:34.969 08:52:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:34.969 08:52:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:34.969 08:52:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:34.969 08:52:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:34.969 08:52:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:34.969 08:52:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:34.969 08:52:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:34.969 08:52:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:34.969 08:52:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:34.969 08:52:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:34.969 08:52:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:14:34.969 08:52:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:14:34.969 08:52:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:34.969 08:52:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:34.969 08:52:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:34.969 08:52:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:34.969 08:52:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:14:34.969 08:52:57 nvmf_rdma.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:34.969 08:52:57 nvmf_rdma.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:34.969 08:52:57 nvmf_rdma.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:34.969 08:52:57 nvmf_rdma.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.969 08:52:57 nvmf_rdma.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.969 08:52:57 nvmf_rdma.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.969 08:52:57 nvmf_rdma.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:14:34.969 08:52:57 nvmf_rdma.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.969 08:52:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:14:34.969 08:52:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:34.969 08:52:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:34.969 08:52:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:34.970 08:52:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:34.970 08:52:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:34.970 08:52:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:34.970 08:52:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:34.970 08:52:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:34.970 08:52:57 nvmf_rdma.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:34.970 08:52:57 nvmf_rdma.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:34.970 08:52:57 nvmf_rdma.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:14:34.970 08:52:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:34.970 08:52:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:34.970 08:52:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:34.970 08:52:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:34.970 08:52:57 nvmf_rdma.nvmf_host_management -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:14:34.970 08:52:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.970 08:52:57 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:34.970 08:52:57 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.970 08:52:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:34.970 08:52:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:34.970 08:52:57 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:14:34.970 08:52:57 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:40.239 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:40.239 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@377 -- # modinfo irdma 00:14:40.239 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management 
-- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:40.240 Found net devices under 0000:af:00.0: cvl_0_0 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:40.240 Found net devices under 0000:af:00.1: cvl_0_1 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@420 -- # rdma_device_init 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@58 -- # uname 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:40.240 08:53:02 
nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo cvl_0_0 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo cvl_0_1 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:14:40.240 8: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:14:40.240 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:14:40.240 altname enp175s0f0np0 00:14:40.240 altname ens801f0np0 00:14:40.240 inet 192.168.100.8/24 scope global cvl_0_0 00:14:40.240 valid_lft forever preferred_lft forever 00:14:40.240 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:14:40.240 valid_lft forever preferred_lft forever 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:14:40.240 9: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:14:40.240 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:14:40.240 altname enp175s0f1np1 00:14:40.240 altname ens801f1np1 00:14:40.240 inet 192.168.100.9/24 scope global cvl_0_1 00:14:40.240 valid_lft forever preferred_lft forever 00:14:40.240 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:14:40.240 valid_lft forever preferred_lft forever 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo cvl_0_0 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo cvl_0_1 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 
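
Everything from the ib_cm modprobe through the two ip addr dumps above is rdma_device_init: load the kernel RDMA stack, then make sure every RDMA NIC has an address. A condensed sketch of that flow; the address-assignment branch is an assumption, since both NICs in this run already had addresses and it never fires here:

# Load the kernel RDMA/IB stack, as load_ib_rdma_modules does above.
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
done

# allocate_nic_ips: hand out 192.168.100.8, .9, ... starting at
# NVMF_IP_LEAST_ADDR=8, keeping any address that is already configured.
count=8
for nic in cvl_0_0 cvl_0_1; do
    ip=$(get_ip_address "$nic")                 # helper sketched further up
    if [ -z "$ip" ]; then
        # assumed fallback, needs root; not exercised in this run
        ip addr add "192.168.100.$count/24" dev "$nic"
    fi
    ip addr show "$nic"                         # matches the dumps above
    ((count++))
done
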
00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:40.240 192.168.100.9' 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:40.240 192.168.100.9' 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # head -n 1 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:40.240 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:40.240 192.168.100.9' 00:14:40.241 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # head -n 1 00:14:40.241 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # tail -n +2 00:14:40.241 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:40.241 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:40.241 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:40.241 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:40.241 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:40.241 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:40.241 08:53:02 nvmf_rdma.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:14:40.241 08:53:02 nvmf_rdma.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:14:40.241 08:53:02 nvmf_rdma.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:40.241 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:40.241 08:53:02 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:40.241 08:53:02 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:40.241 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1291232 00:14:40.241 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1291232 00:14:40.241 08:53:02 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 1291232 ']' 00:14:40.241 08:53:02 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.241 08:53:02 
nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:40.241 08:53:02 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.241 08:53:02 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:40.241 08:53:02 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:40.241 08:53:02 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:40.241 [2024-06-09 08:53:02.610691] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:14:40.241 [2024-06-09 08:53:02.610746] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.241 EAL: No free 2048 kB hugepages reported on node 1 00:14:40.241 [2024-06-09 08:53:02.665699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:40.241 [2024-06-09 08:53:02.739643] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:40.241 [2024-06-09 08:53:02.739684] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:40.241 [2024-06-09 08:53:02.739691] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:40.241 [2024-06-09 08:53:02.739696] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:40.241 [2024-06-09 08:53:02.739701] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
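
nvmfappstart above backgrounds nvmf_tgt (-m 0x1E pins its reactors to cores 1 through 4) and then sits in waitforlisten until the RPC socket answers. A minimal sketch of that launch-and-poll pattern; the retry budget and the rpc_get_methods probe are assumptions, not the harness's exact loop:

SPDK=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk    # tree used in this run
"$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
# Poll the RPC socket; rpc_get_methods succeeds once the app is serving RPCs.
for ((i = 0; i < 100; i++)); do
    if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.1
done
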
00:14:40.241 [2024-06-09 08:53:02.739819] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:14:40.241 [2024-06-09 08:53:02.739896] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:14:40.241 [2024-06-09 08:53:02.739981] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.241 [2024-06-09 08:53:02.739983] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:14:41.176 08:53:03 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:41.176 08:53:03 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:14:41.176 08:53:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:41.176 08:53:03 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:41.176 08:53:03 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:41.176 08:53:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:41.176 08:53:03 nvmf_rdma.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:41.176 08:53:03 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:41.176 08:53:03 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:41.176 [2024-06-09 08:53:03.484008] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x15ffbe0/0x15ff220) succeed. 00:14:41.176 [2024-06-09 08:53:03.492863] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1600f90/0x15ff7a0) succeed. 00:14:41.176 [2024-06-09 08:53:03.492885] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:14:41.176 08:53:03 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:41.176 08:53:03 nvmf_rdma.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:41.176 08:53:03 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:41.176 08:53:03 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:41.176 08:53:03 nvmf_rdma.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:41.176 08:53:03 nvmf_rdma.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:14:41.176 08:53:03 nvmf_rdma.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:14:41.176 08:53:03 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:41.176 08:53:03 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:41.176 Malloc0 00:14:41.176 [2024-06-09 08:53:03.555881] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:41.176 08:53:03 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:41.177 08:53:03 nvmf_rdma.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:41.177 08:53:03 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:41.177 08:53:03 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:41.177 08:53:03 nvmf_rdma.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1291499 00:14:41.177 08:53:03 nvmf_rdma.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1291499 /var/tmp/bdevperf.sock 00:14:41.177 08:53:03 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@830 -- # '[' -z 1291499 ']' 00:14:41.177 08:53:03 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:41.177 08:53:03 nvmf_rdma.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:41.177 08:53:03 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:41.177 08:53:03 nvmf_rdma.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:41.177 08:53:03 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:41.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
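
The rpcs.txt batch fed to rpc_cmd above boils down to a short RPC sequence against the target socket. Written out as individual calls, roughly, with the subsystem flags reconstructed from the serial number and listener seen in this run:

SPDK=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
rpc_cmd() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
rpc_cmd bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512 B blocks
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
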
00:14:41.177 08:53:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:41.177 08:53:03 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:41.177 08:53:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:41.177 08:53:03 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:41.177 08:53:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:41.177 08:53:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:41.177 { 00:14:41.177 "params": { 00:14:41.177 "name": "Nvme$subsystem", 00:14:41.177 "trtype": "$TEST_TRANSPORT", 00:14:41.177 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:41.177 "adrfam": "ipv4", 00:14:41.177 "trsvcid": "$NVMF_PORT", 00:14:41.177 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:41.177 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:41.177 "hdgst": ${hdgst:-false}, 00:14:41.177 "ddgst": ${ddgst:-false} 00:14:41.177 }, 00:14:41.177 "method": "bdev_nvme_attach_controller" 00:14:41.177 } 00:14:41.177 EOF 00:14:41.177 )") 00:14:41.177 08:53:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:41.177 08:53:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:41.177 08:53:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:41.177 08:53:03 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:41.177 "params": { 00:14:41.177 "name": "Nvme0", 00:14:41.177 "trtype": "rdma", 00:14:41.177 "traddr": "192.168.100.8", 00:14:41.177 "adrfam": "ipv4", 00:14:41.177 "trsvcid": "4420", 00:14:41.177 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:41.177 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:41.177 "hdgst": false, 00:14:41.177 "ddgst": false 00:14:41.177 }, 00:14:41.177 "method": "bdev_nvme_attach_controller" 00:14:41.177 }' 00:14:41.177 [2024-06-09 08:53:03.647930] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:14:41.177 [2024-06-09 08:53:03.647974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1291499 ] 00:14:41.177 EAL: No free 2048 kB hugepages reported on node 1 00:14:41.177 [2024-06-09 08:53:03.703523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.435 [2024-06-09 08:53:03.777995] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.435 Running I/O for 10 seconds... 
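
The JSON printed by gen_nvmf_target_json above is the params object that bdevperf receives on /dev/fd/63. A sketch of an equivalent standalone invocation; the outer subsystems/config envelope is reconstructed from SPDK's JSON-config layout and does not appear verbatim in the trace:

# Attach one NVMe-oF controller from JSON config, then run the same workload
# as above: queue depth 64, 64 KiB I/O, verify, 10 seconds.
"$SPDK/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
    -q 64 -o 65536 -w verify -t 10 --json <(cat <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0",
        "trtype": "rdma",
        "traddr": "192.168.100.8",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false,
        "ddgst": false
      }
    }]
  }]
}
EOF
)
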
00:14:42.001 08:53:04 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:42.001 08:53:04 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@863 -- # return 0 00:14:42.001 08:53:04 nvmf_rdma.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:42.001 08:53:04 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:42.001 08:53:04 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:42.001 08:53:04 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:42.001 08:53:04 nvmf_rdma.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:42.001 08:53:04 nvmf_rdma.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:42.001 08:53:04 nvmf_rdma.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:42.001 08:53:04 nvmf_rdma.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:42.001 08:53:04 nvmf_rdma.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:14:42.001 08:53:04 nvmf_rdma.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:14:42.001 08:53:04 nvmf_rdma.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:42.001 08:53:04 nvmf_rdma.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:42.001 08:53:04 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:42.001 08:53:04 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:42.001 08:53:04 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:42.001 08:53:04 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:42.001 08:53:04 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:42.001 08:53:04 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1659 00:14:42.001 08:53:04 nvmf_rdma.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1659 -ge 100 ']' 00:14:42.001 08:53:04 nvmf_rdma.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:14:42.001 08:53:04 nvmf_rdma.nvmf_host_management -- target/host_management.sh@60 -- # break 00:14:42.001 08:53:04 nvmf_rdma.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:14:42.001 08:53:04 nvmf_rdma.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:42.001 08:53:04 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:42.001 08:53:04 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:42.001 08:53:04 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:42.001 08:53:04 nvmf_rdma.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:42.001 08:53:04 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:42.001 08:53:04 nvmf_rdma.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x
00:14:42.001 08:53:04 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:14:42.001 08:53:04 nvmf_rdma.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:14:42.568 [2024-06-09 08:53:05.072745] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:14:42.568 [2024-06-09 08:53:05.072782-05.073731] nvme_qpair.c: 243/474: *NOTICE*: print_command/print_completion pairs for every I/O still queued when the qpair went down: WRITE sqid:1 lba 98304-105344 (len:128 each, SGL KEYED DATA BLOCK, keys 0xd85f9d80, 0x10bd2ec8, 0x6235b528, 0xfb13dbf2) and READ sqid:1 lba 97280-98176 (key 0xac9bda6b), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:61cb70 sqhd:1580 p:0 m:0 dnr:0
00:14:42.570 [2024-06-09 08:53:05.074066] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019201580 was disconnected and freed. reset controller.
00:14:42.570 [2024-06-09 08:53:05.074952] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:14:42.570 task offset: 98304 on job bdev=Nvme0n1 fails
00:14:42.570
00:14:42.570                                                                                       Latency(us)
00:14:42.570 Device Information                                                       : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:42.570 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:14:42.570 Job: Nvme0n1 ended in about 1.13 seconds with error
00:14:42.570 Verification LBA range: start 0x0 length 0x400
00:14:42.570 Nvme0n1                                                                  :       1.13    1584.92      99.06      56.86     0.00   38480.84    1771.03  563235.11
00:14:42.570 ===================================================================================================================
00:14:42.570 Total                                                                    :            1584.92      99.06      56.86     0.00   38480.84    1771.03  563235.11
00:14:42.570 [2024-06-09 08:53:05.076538] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:14:42.570 [2024-06-09 08:53:05.076550] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:14:42.570 [2024-06-09 08:53:05.090295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:14:42.570 [2024-06-09 08:53:05.109462] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
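The abort storm summarized above is the intended failure injection: once waitforio has seen enough completed reads, host0 is removed from cnode0's allowed hosts, the target drops the RDMA qpair and every queued command completes ABORTED - SQ DELETION; re-adding the host lets the controller reset at 08:53:05.109462 reconnect. A sketch of that sequence, with the polling loop reconstructed from the xtrace (rpc_cmd wraps scripts/rpc.py; the sleep interval is an assumption):

waitforio() {
    local rpc_sock=$1 bdev=$2 ret=1 i count
    for ((i = 10; i != 0; i--)); do
        count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
                jq -r '.bdevs[0].num_read_ops')
        # 1659 reads had already completed on the first poll in this run
        if [ "$count" -ge 100 ]; then ret=0; break; fi
        sleep 0.25   # interval assumed; not visible in the trace
    done
    return $ret
}

waitforio /var/tmp/bdevperf.sock Nvme0n1
rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1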
00:14:43.137 08:53:05 nvmf_rdma.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1291499 00:14:43.137 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1291499) - No such process 00:14:43.137 08:53:05 nvmf_rdma.nvmf_host_management -- target/host_management.sh@91 -- # true 00:14:43.137 08:53:05 nvmf_rdma.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:43.137 08:53:05 nvmf_rdma.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:43.137 08:53:05 nvmf_rdma.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:43.137 08:53:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:43.137 08:53:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:43.137 08:53:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:43.137 08:53:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:43.137 { 00:14:43.137 "params": { 00:14:43.137 "name": "Nvme$subsystem", 00:14:43.137 "trtype": "$TEST_TRANSPORT", 00:14:43.137 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:43.137 "adrfam": "ipv4", 00:14:43.137 "trsvcid": "$NVMF_PORT", 00:14:43.137 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:43.137 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:43.137 "hdgst": ${hdgst:-false}, 00:14:43.137 "ddgst": ${ddgst:-false} 00:14:43.137 }, 00:14:43.137 "method": "bdev_nvme_attach_controller" 00:14:43.137 } 00:14:43.137 EOF 00:14:43.137 )") 00:14:43.137 08:53:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:43.137 08:53:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:43.137 08:53:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:43.137 08:53:05 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:43.137 "params": { 00:14:43.137 "name": "Nvme0", 00:14:43.137 "trtype": "rdma", 00:14:43.137 "traddr": "192.168.100.8", 00:14:43.137 "adrfam": "ipv4", 00:14:43.137 "trsvcid": "4420", 00:14:43.137 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:43.137 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:43.137 "hdgst": false, 00:14:43.137 "ddgst": false 00:14:43.137 }, 00:14:43.137 "method": "bdev_nvme_attach_controller" 00:14:43.137 }' 00:14:43.137 [2024-06-09 08:53:05.588411] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:14:43.137 [2024-06-09 08:53:05.588454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1291746 ] 00:14:43.137 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.137 [2024-06-09 08:53:05.643438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.396 [2024-06-09 08:53:05.713679] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.396 Running I/O for 1 seconds... 
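This second, one-second run is the clean-shutdown half of the test; stripped of the harness plumbing, feeding bdevperf the generated config through /dev/fd/62 is ordinary process substitution:

cd /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
./build/examples/bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1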
00:14:44.772
00:14:44.772                                                          Latency(us)
00:14:44.772 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:44.772 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:14:44.772 Verification LBA range: start 0x0 length 0x400
00:14:44.772 Nvme0n1                     :       1.01    3157.72     197.36       0.00     0.00   19853.91    1404.34   33704.23
00:14:44.772 ===================================================================================================================
00:14:44.772 Total                       :            3157.72     197.36       0.00     0.00   19853.91    1404.34   33704.23
00:14:44.772 08:53:07 nvmf_rdma.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:14:44.772 08:53:07 nvmf_rdma.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:14:44.772 08:53:07 nvmf_rdma.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:14:44.772 08:53:07 nvmf_rdma.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:14:44.772 08:53:07 nvmf_rdma.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:14:44.772 08:53:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:14:44.772 08:53:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:14:44.772 08:53:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:14:44.772 08:53:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:14:44.772 08:53:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:14:44.772 08:53:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:14:44.772 08:53:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:14:44.772 rmmod nvme_rdma
00:14:44.772 rmmod nvme_fabrics
00:14:44.772 08:53:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:14:44.772 08:53:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
00:14:44.772 08:53:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
00:14:44.772 08:53:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1291232 ']'
00:14:44.772 08:53:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1291232
00:14:44.772 08:53:07 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@949 -- # '[' -z 1291232 ']'
00:14:44.772 08:53:07 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@953 -- # kill -0 1291232
00:14:44.772 08:53:07 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@954 -- # uname
00:14:44.772 08:53:07 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:14:44.772 08:53:07 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1291232
00:14:44.772 08:53:07 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@955 -- # process_name=reactor_1
00:14:44.772 08:53:07 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']'
00:14:44.772 08:53:07 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1291232'
00:14:44.772 killing process with pid 1291232
00:14:44.772 08:53:07 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@968 -- # kill 1291232
08:53:07 nvmf_rdma.nvmf_host_management
-- common/autotest_common.sh@973 -- # wait 1291232 00:14:45.030 [2024-06-09 08:53:07.404905] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:45.030 08:53:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:45.031 08:53:07 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:45.031 08:53:07 nvmf_rdma.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:14:45.031 00:14:45.031 real 0m10.192s 00:14:45.031 user 0m23.484s 00:14:45.031 sys 0m4.751s 00:14:45.031 08:53:07 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@1125 -- # xtrace_disable 00:14:45.031 08:53:07 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:45.031 ************************************ 00:14:45.031 END TEST nvmf_host_management 00:14:45.031 ************************************ 00:14:45.031 08:53:07 nvmf_rdma -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:14:45.031 08:53:07 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:14:45.031 08:53:07 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:14:45.031 08:53:07 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:45.031 ************************************ 00:14:45.031 START TEST nvmf_lvol 00:14:45.031 ************************************ 00:14:45.031 08:53:07 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:14:45.031 * Looking for test storage... 00:14:45.031 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:14:45.031 08:53:07 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:14:45.031 08:53:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:14:45.031 08:53:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:45.031 08:53:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:45.031 08:53:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:45.031 08:53:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:45.290 08:53:07 
nvmf_rdma.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh
00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- paths/export.sh@2-4 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three toolchain dirs re-prepended on each nested source]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- paths/export.sh@5 -- # export PATH
00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- paths/export.sh@6 -- # echo [PATH as above]
00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@47 -- # : 0
00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@31 -- #
NVMF_APP+=("${NO_HUGE[@]}") 00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:45.290 08:53:07 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:14:45.291 08:53:07 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:50.562 08:53:12 
nvmf_rdma.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:50.562 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:50.562 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:50.562 
08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@377 -- # modinfo irdma 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:50.562 Found net devices under 0000:af:00.0: cvl_0_0 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:50.562 Found net devices under 0000:af:00.1: cvl_0_1 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@420 -- # rdma_device_init 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@58 -- # uname 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 
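allocate_nic_ips below resolves each RDMA-capable interface's IPv4 address with the same three-stage pipeline that repeats through the rest of this trace; as a standalone helper it is simply:

get_ip_address() {
    local interface=$1
    # field 4 of `ip -o -4 addr show` is ADDR/PREFIX; strip the prefix length
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address cvl_0_0   # -> 192.168.100.8 in this run
get_ip_address cvl_0_1   # -> 192.168.100.9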
00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo cvl_0_0 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo cvl_0_1 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:50.562 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:14:50.562 8: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:14:50.562 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:14:50.563 altname enp175s0f0np0 00:14:50.563 altname ens801f0np0 00:14:50.563 inet 192.168.100.8/24 scope global cvl_0_0 00:14:50.563 valid_lft forever preferred_lft forever 00:14:50.563 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:14:50.563 valid_lft forever preferred_lft forever 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ 
-f1 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:14:50.563 9: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:14:50.563 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:14:50.563 altname enp175s0f1np1 00:14:50.563 altname ens801f1np1 00:14:50.563 inet 192.168.100.9/24 scope global cvl_0_1 00:14:50.563 valid_lft forever preferred_lft forever 00:14:50.563 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:14:50.563 valid_lft forever preferred_lft forever 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo cvl_0_0 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo cvl_0_1 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in 
$(get_rdma_if_list) 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:50.563 192.168.100.9' 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:50.563 192.168.100.9' 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # head -n 1 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:50.563 192.168.100.9' 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # tail -n +2 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # head -n 1 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:50.563 08:53:12 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:50.563 08:53:13 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:50.563 08:53:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:50.563 08:53:13 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@723 -- # xtrace_disable 00:14:50.563 08:53:13 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:50.563 08:53:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1295176 00:14:50.563 08:53:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1295176 00:14:50.563 08:53:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:50.563 08:53:13 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@830 -- # '[' -z 1295176 ']' 00:14:50.563 08:53:13 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.563 08:53:13 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@835 -- # local max_retries=100 00:14:50.563 08:53:13 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.563 08:53:13 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@839 -- # xtrace_disable 00:14:50.563 08:53:13 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:50.563 [2024-06-09 08:53:13.058010] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
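The address harvesting traced above is worth spelling out: each interface's IPv4 address is extracted with an ip/awk/cut pipeline, and the first and second target IPs are then peeled off a newline-separated list with head and tail. A condensed sketch using the cvl_0_* devices found earlier in this log:

get_ip_address() {
  # "ip -o -4" prints one line per address; field 4 is ADDR/PREFIX.
  ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}
RDMA_IP_LIST="$(get_ip_address cvl_0_0)
$(get_ip_address cvl_0_1)"
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9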
00:14:50.563 [2024-06-09 08:53:13.058052] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.563 EAL: No free 2048 kB hugepages reported on node 1 00:14:50.563 [2024-06-09 08:53:13.114338] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:50.820 [2024-06-09 08:53:13.190204] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:50.820 [2024-06-09 08:53:13.190241] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:50.820 [2024-06-09 08:53:13.190247] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:50.820 [2024-06-09 08:53:13.190253] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:50.820 [2024-06-09 08:53:13.190258] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:50.820 [2024-06-09 08:53:13.190302] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.820 [2024-06-09 08:53:13.190380] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:14:50.820 [2024-06-09 08:53:13.190382] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.384 08:53:13 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:14:51.384 08:53:13 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@863 -- # return 0 00:14:51.384 08:53:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:51.384 08:53:13 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@729 -- # xtrace_disable 00:14:51.384 08:53:13 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:51.384 08:53:13 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:51.384 08:53:13 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:51.642 [2024-06-09 08:53:14.043581] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x2234dd0/0x2234410) succeed. 00:14:51.642 [2024-06-09 08:53:14.052228] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x2236100/0x2234990) succeed. 00:14:51.642 [2024-06-09 08:53:14.052252] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:14:51.642 08:53:14 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:51.900 08:53:14 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:51.900 08:53:14 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:52.158 08:53:14 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:52.158 08:53:14 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:52.158 08:53:14 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:52.416 08:53:14 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c402dfcd-5d34-4d16-aa3a-b6a7bec1541f 00:14:52.416 08:53:14 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c402dfcd-5d34-4d16-aa3a-b6a7bec1541f lvol 20 00:14:52.675 08:53:15 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=94fdc957-21fb-4c63-b908-183a5ada4dfc 00:14:52.675 08:53:15 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:52.675 08:53:15 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 94fdc957-21fb-4c63-b908-183a5ada4dfc 00:14:52.933 08:53:15 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:14:53.224 [2024-06-09 08:53:15.526858] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:53.224 08:53:15 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:14:53.224 08:53:15 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1295646 00:14:53.224 08:53:15 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:53.224 08:53:15 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:53.224 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.598 08:53:16 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 94fdc957-21fb-4c63-b908-183a5ada4dfc MY_SNAPSHOT 00:14:54.598 08:53:16 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5058fc66-5d93-42e0-9b06-98491f16c402 00:14:54.598 08:53:16 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 94fdc957-21fb-4c63-b908-183a5ada4dfc 30 00:14:54.598 08:53:17 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 5058fc66-5d93-42e0-9b06-98491f16c402 
MY_CLONE 00:14:54.856 08:53:17 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b09c7f47-c87a-44ea-8db7-83ca99847c86 00:14:54.856 08:53:17 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate b09c7f47-c87a-44ea-8db7-83ca99847c86 00:14:55.115 08:53:17 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1295646 00:15:05.083 Initializing NVMe Controllers 00:15:05.083 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:15:05.083 Controller IO queue size 128, less than required. 00:15:05.083 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:05.083 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:15:05.083 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:15:05.083 Initialization complete. Launching workers. 00:15:05.083 ======================================================== 00:15:05.083 Latency(us) 00:15:05.083 Device Information : IOPS MiB/s Average min max 00:15:05.083 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16570.20 64.73 7726.24 2146.01 49771.60 00:15:05.083 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16680.10 65.16 7675.41 3019.96 40081.91 00:15:05.083 ======================================================== 00:15:05.083 Total : 33250.30 129.88 7700.74 2146.01 49771.60 00:15:05.083 00:15:05.083 08:53:27 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:05.083 08:53:27 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 94fdc957-21fb-4c63-b908-183a5ada4dfc 00:15:05.083 08:53:27 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c402dfcd-5d34-4d16-aa3a-b6a7bec1541f 00:15:05.343 08:53:27 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:15:05.343 08:53:27 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:15:05.343 08:53:27 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:15:05.343 08:53:27 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:05.343 08:53:27 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:15:05.343 08:53:27 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:05.343 08:53:27 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:05.343 08:53:27 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:15:05.343 08:53:27 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:05.343 08:53:27 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:05.343 rmmod nvme_rdma 00:15:05.343 rmmod nvme_fabrics 00:15:05.343 08:53:27 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:05.343 08:53:27 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:15:05.343 08:53:27 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:15:05.343 08:53:27 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1295176 ']' 00:15:05.343 08:53:27 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1295176 00:15:05.343 08:53:27 nvmf_rdma.nvmf_lvol -- 
common/autotest_common.sh@949 -- # '[' -z 1295176 ']' 00:15:05.343 08:53:27 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@953 -- # kill -0 1295176 00:15:05.343 08:53:27 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@954 -- # uname 00:15:05.343 08:53:27 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:05.343 08:53:27 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1295176 00:15:05.343 08:53:27 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:05.343 08:53:27 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:05.343 08:53:27 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1295176' 00:15:05.343 killing process with pid 1295176 00:15:05.343 08:53:27 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@968 -- # kill 1295176 00:15:05.343 08:53:27 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@973 -- # wait 1295176 00:15:05.616 08:53:27 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:05.616 08:53:27 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:05.616 00:15:05.616 real 0m20.494s 00:15:05.616 user 1m10.477s 00:15:05.616 sys 0m5.148s 00:15:05.616 08:53:27 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:05.616 08:53:27 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:05.616 ************************************ 00:15:05.616 END TEST nvmf_lvol 00:15:05.616 ************************************ 00:15:05.616 08:53:28 nvmf_rdma -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:15:05.616 08:53:28 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:05.616 08:53:28 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:05.616 08:53:28 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:05.616 ************************************ 00:15:05.616 START TEST nvmf_lvs_grow 00:15:05.616 ************************************ 00:15:05.616 08:53:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:15:05.616 * Looking for test storage... 
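Before the lvs_grow suite gets going, the nvmf_lvol flow that just passed is easier to read condensed. Every call below appears verbatim in the trace; $rpc abbreviates the repository's rpc.py path, and the UUID variables capture what each call prints rather than hard-coded values:

rpc="/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py"
$rpc bdev_malloc_create 64 512                       # -> Malloc0
$rpc bdev_malloc_create 64 512                       # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)       # lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)      # 20 MiB lvol
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)  # read-only snapshot
$rpc bdev_lvol_resize "$lvol" 30                     # grow the live lvol
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)       # thin clone of the snapshot
$rpc bdev_lvol_inflate "$clone"                      # decouple clone from parent

The point of the test is that the snapshot, resize, clone, and inflate steps all run while spdk_nvme_perf is writing to the exported namespace, so the data path stays live across every lvol operation.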
00:15:05.616 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:15:05.616 08:53:28 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:15:05.616 08:53:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:15:05.616 08:53:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:05.616 08:53:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:05.616 08:53:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:05.616 08:53:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:05.616 08:53:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:05.616 08:53:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:05.616 08:53:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:05.616 08:53:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:05.616 08:53:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:05.616 08:53:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:05.616 08:53:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:05.616 08:53:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:15:05.616 08:53:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:05.616 08:53:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:05.616 08:53:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:05.616 08:53:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:05.616 08:53:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:15:05.616 08:53:28 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:05.616 08:53:28 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:05.928 08:53:28 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:05.928 08:53:28 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.928 08:53:28 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.928 08:53:28 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.928 08:53:28 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:15:05.928 08:53:28 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:05.928 08:53:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:15:05.928 08:53:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:05.928 08:53:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:05.928 08:53:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:05.928 08:53:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:05.928 08:53:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:05.928 08:53:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:05.928 08:53:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:05.928 08:53:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:05.928 08:53:28 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:15:05.928 08:53:28 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:05.928 08:53:28 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:15:05.928 08:53:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:05.928 08:53:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:05.928 08:53:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:05.928 08:53:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:05.928 08:53:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:05.928 08:53:28 nvmf_rdma.nvmf_lvs_grow -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.928 08:53:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.928 08:53:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:05.928 08:53:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:05.928 08:53:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:05.928 08:53:28 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:15:05.928 08:53:28 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@322 -- # 
pci_devs+=("${x722[@]}") 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:11.204 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:11.204 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@377 -- # modinfo irdma 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:11.204 Found net devices under 
0000:af:00.0: cvl_0_0 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:11.204 Found net devices under 0000:af:00.1: cvl_0_1 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@420 -- # rdma_device_init 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@58 -- # uname 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- 
nvmf/common.sh@104 -- # echo cvl_0_0 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo cvl_0_1 00:15:11.204 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:15:11.205 8: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:15:11.205 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:15:11.205 altname enp175s0f0np0 00:15:11.205 altname ens801f0np0 00:15:11.205 inet 192.168.100.8/24 scope global cvl_0_0 00:15:11.205 valid_lft forever preferred_lft forever 00:15:11.205 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:15:11.205 valid_lft forever preferred_lft forever 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:15:11.205 9: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:15:11.205 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:15:11.205 altname enp175s0f1np1 00:15:11.205 altname ens801f1np1 00:15:11.205 inet 192.168.100.9/24 scope global cvl_0_1 00:15:11.205 valid_lft forever preferred_lft forever 00:15:11.205 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:15:11.205 valid_lft forever preferred_lft forever 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- 
nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo cvl_0_0 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo cvl_0_1 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:11.205 192.168.100.9' 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:11.205 192.168.100.9' 
00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # head -n 1 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:11.205 192.168.100.9' 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # tail -n +2 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # head -n 1 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1300680 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1300680 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@830 -- # '[' -z 1300680 ']' 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:11.205 08:53:33 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:11.205 [2024-06-09 08:53:33.687396] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:15:11.205 [2024-06-09 08:53:33.687437] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.205 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.205 [2024-06-09 08:53:33.741774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.464 [2024-06-09 08:53:33.815715] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:11.464 [2024-06-09 08:53:33.815757] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
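nvmfappstart -m 0x1 above launches the target and blocks until its RPC socket answers. A rough equivalent is sketched below; the real waitforlisten helper in autotest_common.sh does more bookkeeping, and the rpc_get_methods probe is just one convenient way to test the socket, not what the script literally calls:

# Launch the target with the same arguments recorded in the trace.
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
# Poll the UNIX-domain RPC socket until the app is ready to serve RPCs.
until /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.1
done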
00:15:11.464 [2024-06-09 08:53:33.815764] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:11.464 [2024-06-09 08:53:33.815770] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:11.464 [2024-06-09 08:53:33.815775] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:11.464 [2024-06-09 08:53:33.815800] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.032 08:53:34 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:12.032 08:53:34 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@863 -- # return 0 00:15:12.032 08:53:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:12.032 08:53:34 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:12.032 08:53:34 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:12.032 08:53:34 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.032 08:53:34 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:12.291 [2024-06-09 08:53:34.671056] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0xb657e0/0xb64e20) succeed. 00:15:12.291 [2024-06-09 08:53:34.679749] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0xb66a90/0xb653a0) succeed. 00:15:12.291 [2024-06-09 08:53:34.679771] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:15:12.291 08:53:34 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:15:12.291 08:53:34 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:15:12.291 08:53:34 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:12.291 08:53:34 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:12.291 ************************************ 00:15:12.291 START TEST lvs_grow_clean 00:15:12.291 ************************************ 00:15:12.291 08:53:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # lvs_grow 00:15:12.291 08:53:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:12.291 08:53:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:12.291 08:53:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:12.291 08:53:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:12.291 08:53:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:12.291 08:53:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:12.291 08:53:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:12.291 08:53:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:12.291 08:53:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:12.549 08:53:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:12.549 08:53:34 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:12.808 08:53:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=a42b9ae3-13d5-4c85-90a8-4637354fc636 00:15:12.808 08:53:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a42b9ae3-13d5-4c85-90a8-4637354fc636 00:15:12.808 08:53:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:12.808 08:53:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:12.808 08:53:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:12.808 08:53:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a42b9ae3-13d5-4c85-90a8-4637354fc636 lvol 150 00:15:13.066 08:53:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=40767ad7-24b5-4d90-b004-bbd347a899a3 00:15:13.066 08:53:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:13.066 08:53:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:13.066 [2024-06-09 08:53:35.617079] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:13.066 [2024-06-09 08:53:35.617126] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:13.066 true 00:15:13.325 08:53:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a42b9ae3-13d5-4c85-90a8-4637354fc636 00:15:13.325 08:53:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:13.325 08:53:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:13.325 08:53:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:13.584 08:53:35 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 40767ad7-24b5-4d90-b004-bbd347a899a3 00:15:13.584 08:53:36 
nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:15:13.842 [2024-06-09 08:53:36.270996] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:13.842 08:53:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:15:14.101 08:53:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1301197 00:15:14.101 08:53:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:14.101 08:53:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1301197 /var/tmp/bdevperf.sock 00:15:14.101 08:53:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@830 -- # '[' -z 1301197 ']' 00:15:14.101 08:53:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:14.101 08:53:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:14.101 08:53:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:14.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:14.101 08:53:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:14.101 08:53:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:14.101 08:53:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:14.101 [2024-06-09 08:53:36.488110] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
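bdevperf is started with -z, so it sits idle until it is handed a bdev and told to run. The two RPC-socket steps that follow in this trace reduce to the sketch below (paths shortened to their repository-relative form):

# Attach the NVMe-oF controller; Nvme0n1 then appears as a bdev inside bdevperf.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
  -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 \
  -n nqn.2016-06.io.spdk:cnode0
# Kick off the workload defined on the bdevperf command line (randwrite, q=128, 10 s).
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests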
00:15:14.101 [2024-06-09 08:53:36.488159] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1301197 ] 00:15:14.102 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.102 [2024-06-09 08:53:36.540792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.102 [2024-06-09 08:53:36.617895] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.036 08:53:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:15.036 08:53:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@863 -- # return 0 00:15:15.036 08:53:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:15.036 Nvme0n1 00:15:15.036 08:53:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:15.295 [ 00:15:15.295 { 00:15:15.295 "name": "Nvme0n1", 00:15:15.295 "aliases": [ 00:15:15.295 "40767ad7-24b5-4d90-b004-bbd347a899a3" 00:15:15.295 ], 00:15:15.295 "product_name": "NVMe disk", 00:15:15.295 "block_size": 4096, 00:15:15.295 "num_blocks": 38912, 00:15:15.295 "uuid": "40767ad7-24b5-4d90-b004-bbd347a899a3", 00:15:15.295 "assigned_rate_limits": { 00:15:15.295 "rw_ios_per_sec": 0, 00:15:15.295 "rw_mbytes_per_sec": 0, 00:15:15.295 "r_mbytes_per_sec": 0, 00:15:15.295 "w_mbytes_per_sec": 0 00:15:15.295 }, 00:15:15.295 "claimed": false, 00:15:15.295 "zoned": false, 00:15:15.295 "supported_io_types": { 00:15:15.295 "read": true, 00:15:15.295 "write": true, 00:15:15.295 "unmap": true, 00:15:15.295 "write_zeroes": true, 00:15:15.295 "flush": true, 00:15:15.295 "reset": true, 00:15:15.295 "compare": true, 00:15:15.295 "compare_and_write": true, 00:15:15.295 "abort": true, 00:15:15.295 "nvme_admin": true, 00:15:15.295 "nvme_io": true 00:15:15.295 }, 00:15:15.295 "memory_domains": [ 00:15:15.295 { 00:15:15.295 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:15:15.295 "dma_device_type": 0 00:15:15.295 } 00:15:15.295 ], 00:15:15.295 "driver_specific": { 00:15:15.295 "nvme": [ 00:15:15.295 { 00:15:15.295 "trid": { 00:15:15.295 "trtype": "RDMA", 00:15:15.295 "adrfam": "IPv4", 00:15:15.295 "traddr": "192.168.100.8", 00:15:15.295 "trsvcid": "4420", 00:15:15.295 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:15.295 }, 00:15:15.295 "ctrlr_data": { 00:15:15.295 "cntlid": 1, 00:15:15.295 "vendor_id": "0x8086", 00:15:15.295 "model_number": "SPDK bdev Controller", 00:15:15.295 "serial_number": "SPDK0", 00:15:15.295 "firmware_revision": "24.09", 00:15:15.295 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:15.295 "oacs": { 00:15:15.295 "security": 0, 00:15:15.295 "format": 0, 00:15:15.295 "firmware": 0, 00:15:15.295 "ns_manage": 0 00:15:15.295 }, 00:15:15.295 "multi_ctrlr": true, 00:15:15.295 "ana_reporting": false 00:15:15.295 }, 00:15:15.295 "vs": { 00:15:15.295 "nvme_version": "1.3" 00:15:15.295 }, 00:15:15.295 "ns_data": { 00:15:15.295 "id": 1, 00:15:15.295 "can_share": true 00:15:15.295 } 00:15:15.295 } 00:15:15.295 ], 00:15:15.295 "mp_policy": "active_passive" 00:15:15.295 } 00:15:15.295 } 00:15:15.295 ] 00:15:15.295 
08:53:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1301370 00:15:15.295 08:53:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:15.295 08:53:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:15.295 Running I/O for 10 seconds... 00:15:16.672 Latency(us) 00:15:16.672 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.672 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:16.672 Nvme0n1 : 1.00 34787.00 135.89 0.00 0.00 0.00 0.00 0.00 00:15:16.672 =================================================================================================================== 00:15:16.672 Total : 34787.00 135.89 0.00 0.00 0.00 0.00 0.00 00:15:16.672 00:15:17.240 08:53:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a42b9ae3-13d5-4c85-90a8-4637354fc636 00:15:17.499 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:17.499 Nvme0n1 : 2.00 35181.00 137.43 0.00 0.00 0.00 0.00 0.00 00:15:17.499 =================================================================================================================== 00:15:17.499 Total : 35181.00 137.43 0.00 0.00 0.00 0.00 0.00 00:15:17.499 00:15:17.499 true 00:15:17.499 08:53:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a42b9ae3-13d5-4c85-90a8-4637354fc636 00:15:17.499 08:53:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:17.757 08:53:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:17.758 08:53:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:17.758 08:53:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1301370 00:15:18.325 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:18.325 Nvme0n1 : 3.00 35113.33 137.16 0.00 0.00 0.00 0.00 0.00 00:15:18.325 =================================================================================================================== 00:15:18.325 Total : 35113.33 137.16 0.00 0.00 0.00 0.00 0.00 00:15:18.325 00:15:19.261 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:19.261 Nvme0n1 : 4.00 35298.00 137.88 0.00 0.00 0.00 0.00 0.00 00:15:19.261 =================================================================================================================== 00:15:19.261 Total : 35298.00 137.88 0.00 0.00 0.00 0.00 0.00 00:15:19.261 00:15:20.636 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:20.636 Nvme0n1 : 5.00 35410.40 138.32 0.00 0.00 0.00 0.00 0.00 00:15:20.636 =================================================================================================================== 00:15:20.636 Total : 35410.40 138.32 0.00 0.00 0.00 0.00 0.00 00:15:20.636 00:15:21.571 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:21.571 Nvme0n1 : 6.00 35478.33 138.59 0.00 0.00 0.00 0.00 0.00 00:15:21.571 
=================================================================================================================== 00:15:21.571 Total : 35478.33 138.59 0.00 0.00 0.00 0.00 0.00 00:15:21.571 00:15:22.505 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:22.505 Nvme0n1 : 7.00 35537.57 138.82 0.00 0.00 0.00 0.00 0.00 00:15:22.505 =================================================================================================================== 00:15:22.505 Total : 35537.57 138.82 0.00 0.00 0.00 0.00 0.00 00:15:22.505 00:15:23.440 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:23.440 Nvme0n1 : 8.00 35583.38 139.00 0.00 0.00 0.00 0.00 0.00 00:15:23.440 =================================================================================================================== 00:15:23.440 Total : 35583.38 139.00 0.00 0.00 0.00 0.00 0.00 00:15:23.440 00:15:24.375 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:24.375 Nvme0n1 : 9.00 35616.67 139.13 0.00 0.00 0.00 0.00 0.00 00:15:24.375 =================================================================================================================== 00:15:24.375 Total : 35616.67 139.13 0.00 0.00 0.00 0.00 0.00 00:15:24.375 00:15:25.310 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:25.310 Nvme0n1 : 10.00 35647.30 139.25 0.00 0.00 0.00 0.00 0.00 00:15:25.310 =================================================================================================================== 00:15:25.310 Total : 35647.30 139.25 0.00 0.00 0.00 0.00 0.00 00:15:25.310 00:15:25.310 00:15:25.310 Latency(us) 00:15:25.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.310 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:25.310 Nvme0n1 : 10.00 35647.71 139.25 0.00 0.00 3587.68 2293.76 19223.89 00:15:25.310 =================================================================================================================== 00:15:25.310 Total : 35647.71 139.25 0.00 0.00 3587.68 2293.76 19223.89 00:15:25.310 0 00:15:25.310 08:53:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1301197 00:15:25.310 08:53:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@949 -- # '[' -z 1301197 ']' 00:15:25.310 08:53:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # kill -0 1301197 00:15:25.310 08:53:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # uname 00:15:25.310 08:53:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:25.310 08:53:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1301197 00:15:25.569 08:53:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:15:25.569 08:53:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:15:25.569 08:53:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1301197' 00:15:25.569 killing process with pid 1301197 00:15:25.569 08:53:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # kill 1301197 00:15:25.569 Received shutdown signal, test time was about 10.000000 seconds 00:15:25.569 00:15:25.569 Latency(us) 00:15:25.569 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.569 =================================================================================================================== 00:15:25.569 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:25.569 08:53:47 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # wait 1301197 00:15:25.569 08:53:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:15:25.828 08:53:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:26.087 08:53:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a42b9ae3-13d5-4c85-90a8-4637354fc636 00:15:26.087 08:53:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:26.087 08:53:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:26.087 08:53:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:15:26.087 08:53:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:26.347 [2024-06-09 08:53:48.750332] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:26.347 08:53:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a42b9ae3-13d5-4c85-90a8-4637354fc636 00:15:26.347 08:53:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # local es=0 00:15:26.347 08:53:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a42b9ae3-13d5-4c85-90a8-4637354fc636 00:15:26.347 08:53:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:15:26.347 08:53:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:26.347 08:53:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:15:26.347 08:53:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:26.347 08:53:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:15:26.347 08:53:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:26.347 08:53:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:15:26.347 08:53:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py ]] 00:15:26.347 08:53:48 
nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a42b9ae3-13d5-4c85-90a8-4637354fc636 00:15:26.605 request: 00:15:26.605 { 00:15:26.605 "uuid": "a42b9ae3-13d5-4c85-90a8-4637354fc636", 00:15:26.605 "method": "bdev_lvol_get_lvstores", 00:15:26.605 "req_id": 1 00:15:26.605 } 00:15:26.605 Got JSON-RPC error response 00:15:26.605 response: 00:15:26.605 { 00:15:26.605 "code": -19, 00:15:26.605 "message": "No such device" 00:15:26.605 } 00:15:26.605 08:53:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # es=1 00:15:26.605 08:53:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:26.606 08:53:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:26.606 08:53:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:26.606 08:53:48 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:26.606 aio_bdev 00:15:26.606 08:53:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 40767ad7-24b5-4d90-b004-bbd347a899a3 00:15:26.606 08:53:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_name=40767ad7-24b5-4d90-b004-bbd347a899a3 00:15:26.606 08:53:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:26.606 08:53:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local i 00:15:26.606 08:53:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:26.606 08:53:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:26.606 08:53:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:26.865 08:53:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 40767ad7-24b5-4d90-b004-bbd347a899a3 -t 2000 00:15:27.123 [ 00:15:27.123 { 00:15:27.123 "name": "40767ad7-24b5-4d90-b004-bbd347a899a3", 00:15:27.123 "aliases": [ 00:15:27.123 "lvs/lvol" 00:15:27.123 ], 00:15:27.123 "product_name": "Logical Volume", 00:15:27.123 "block_size": 4096, 00:15:27.123 "num_blocks": 38912, 00:15:27.123 "uuid": "40767ad7-24b5-4d90-b004-bbd347a899a3", 00:15:27.123 "assigned_rate_limits": { 00:15:27.123 "rw_ios_per_sec": 0, 00:15:27.123 "rw_mbytes_per_sec": 0, 00:15:27.123 "r_mbytes_per_sec": 0, 00:15:27.123 "w_mbytes_per_sec": 0 00:15:27.123 }, 00:15:27.123 "claimed": false, 00:15:27.123 "zoned": false, 00:15:27.123 "supported_io_types": { 00:15:27.123 "read": true, 00:15:27.123 "write": true, 00:15:27.123 "unmap": true, 00:15:27.123 "write_zeroes": true, 00:15:27.123 "flush": false, 00:15:27.123 "reset": true, 00:15:27.123 "compare": false, 00:15:27.123 "compare_and_write": false, 00:15:27.123 "abort": false, 00:15:27.123 "nvme_admin": false, 00:15:27.123 "nvme_io": false 00:15:27.123 }, 00:15:27.123 "driver_specific": { 00:15:27.123 "lvol": { 00:15:27.123 "lvol_store_uuid": 
"a42b9ae3-13d5-4c85-90a8-4637354fc636", 00:15:27.123 "base_bdev": "aio_bdev", 00:15:27.123 "thin_provision": false, 00:15:27.123 "num_allocated_clusters": 38, 00:15:27.123 "snapshot": false, 00:15:27.123 "clone": false, 00:15:27.123 "esnap_clone": false 00:15:27.123 } 00:15:27.123 } 00:15:27.123 } 00:15:27.123 ] 00:15:27.123 08:53:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # return 0 00:15:27.123 08:53:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a42b9ae3-13d5-4c85-90a8-4637354fc636 00:15:27.123 08:53:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:27.123 08:53:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:27.123 08:53:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a42b9ae3-13d5-4c85-90a8-4637354fc636 00:15:27.123 08:53:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:27.382 08:53:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:27.382 08:53:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 40767ad7-24b5-4d90-b004-bbd347a899a3 00:15:27.640 08:53:49 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a42b9ae3-13d5-4c85-90a8-4637354fc636 00:15:27.640 08:53:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:27.899 08:53:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:27.899 00:15:27.899 real 0m15.637s 00:15:27.899 user 0m15.706s 00:15:27.899 sys 0m1.006s 00:15:27.899 08:53:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:27.899 08:53:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:27.899 ************************************ 00:15:27.899 END TEST lvs_grow_clean 00:15:27.899 ************************************ 00:15:27.899 08:53:50 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:27.900 08:53:50 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:27.900 08:53:50 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:27.900 08:53:50 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:27.900 ************************************ 00:15:27.900 START TEST lvs_grow_dirty 00:15:27.900 ************************************ 00:15:27.900 08:53:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # lvs_grow dirty 00:15:27.900 08:53:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:27.900 08:53:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:27.900 
08:53:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:27.900 08:53:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:27.900 08:53:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:27.900 08:53:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:27.900 08:53:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:27.900 08:53:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:27.900 08:53:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:28.158 08:53:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:28.158 08:53:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:28.416 08:53:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=671a0b94-b503-414c-8322-d96401e6b5dc 00:15:28.416 08:53:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 671a0b94-b503-414c-8322-d96401e6b5dc 00:15:28.416 08:53:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:28.416 08:53:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:28.416 08:53:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:28.416 08:53:50 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 671a0b94-b503-414c-8322-d96401e6b5dc lvol 150 00:15:28.673 08:53:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=0ff94741-1203-4b6c-bb4c-dfe9fc73c340 00:15:28.673 08:53:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:28.673 08:53:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:28.931 [2024-06-09 08:53:51.275302] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:28.931 [2024-06-09 08:53:51.275348] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:28.931 true 00:15:28.931 08:53:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
671a0b94-b503-414c-8322-d96401e6b5dc 00:15:28.931 08:53:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:28.931 08:53:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:28.931 08:53:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:29.189 08:53:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0ff94741-1203-4b6c-bb4c-dfe9fc73c340 00:15:29.484 08:53:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:15:29.484 [2024-06-09 08:53:51.945242] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:29.484 08:53:51 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:15:29.742 08:53:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1303832 00:15:29.742 08:53:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:29.742 08:53:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1303832 /var/tmp/bdevperf.sock 00:15:29.742 08:53:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:29.742 08:53:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 1303832 ']' 00:15:29.742 08:53:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:29.742 08:53:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:29.742 08:53:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:29.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:29.742 08:53:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:29.742 08:53:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:29.742 [2024-06-09 08:53:52.157906] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:15:29.742 [2024-06-09 08:53:52.157962] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1303832 ] 00:15:29.742 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.742 [2024-06-09 08:53:52.211798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.742 [2024-06-09 08:53:52.288043] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.676 08:53:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:30.676 08:53:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:15:30.676 08:53:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:30.676 Nvme0n1 00:15:30.676 08:53:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:30.934 [ 00:15:30.934 { 00:15:30.934 "name": "Nvme0n1", 00:15:30.934 "aliases": [ 00:15:30.934 "0ff94741-1203-4b6c-bb4c-dfe9fc73c340" 00:15:30.934 ], 00:15:30.934 "product_name": "NVMe disk", 00:15:30.934 "block_size": 4096, 00:15:30.934 "num_blocks": 38912, 00:15:30.934 "uuid": "0ff94741-1203-4b6c-bb4c-dfe9fc73c340", 00:15:30.934 "assigned_rate_limits": { 00:15:30.934 "rw_ios_per_sec": 0, 00:15:30.934 "rw_mbytes_per_sec": 0, 00:15:30.934 "r_mbytes_per_sec": 0, 00:15:30.934 "w_mbytes_per_sec": 0 00:15:30.934 }, 00:15:30.934 "claimed": false, 00:15:30.934 "zoned": false, 00:15:30.934 "supported_io_types": { 00:15:30.934 "read": true, 00:15:30.934 "write": true, 00:15:30.934 "unmap": true, 00:15:30.934 "write_zeroes": true, 00:15:30.934 "flush": true, 00:15:30.934 "reset": true, 00:15:30.934 "compare": true, 00:15:30.934 "compare_and_write": true, 00:15:30.934 "abort": true, 00:15:30.934 "nvme_admin": true, 00:15:30.934 "nvme_io": true 00:15:30.934 }, 00:15:30.934 "memory_domains": [ 00:15:30.934 { 00:15:30.934 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:15:30.934 "dma_device_type": 0 00:15:30.934 } 00:15:30.934 ], 00:15:30.934 "driver_specific": { 00:15:30.934 "nvme": [ 00:15:30.934 { 00:15:30.934 "trid": { 00:15:30.934 "trtype": "RDMA", 00:15:30.934 "adrfam": "IPv4", 00:15:30.934 "traddr": "192.168.100.8", 00:15:30.934 "trsvcid": "4420", 00:15:30.934 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:30.934 }, 00:15:30.934 "ctrlr_data": { 00:15:30.934 "cntlid": 1, 00:15:30.934 "vendor_id": "0x8086", 00:15:30.934 "model_number": "SPDK bdev Controller", 00:15:30.934 "serial_number": "SPDK0", 00:15:30.934 "firmware_revision": "24.09", 00:15:30.934 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:30.934 "oacs": { 00:15:30.934 "security": 0, 00:15:30.934 "format": 0, 00:15:30.934 "firmware": 0, 00:15:30.934 "ns_manage": 0 00:15:30.934 }, 00:15:30.934 "multi_ctrlr": true, 00:15:30.934 "ana_reporting": false 00:15:30.934 }, 00:15:30.934 "vs": { 00:15:30.934 "nvme_version": "1.3" 00:15:30.934 }, 00:15:30.934 "ns_data": { 00:15:30.934 "id": 1, 00:15:30.934 "can_share": true 00:15:30.934 } 00:15:30.934 } 00:15:30.934 ], 00:15:30.934 "mp_policy": "active_passive" 00:15:30.934 } 00:15:30.934 } 00:15:30.934 ] 00:15:30.934 
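The I/O phase that follows is driven the same way in both the clean and dirty variants: bdevperf is started suspended (-z) and the harness kicks it through its RPC helper. A minimal sketch, assuming the flags shown in the trace:

  bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Here -q 128 keeps 128 outstanding 4096-byte (-o) random writes in flight for the 10-second run, and -S 1 prints the per-second status rows seen below.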
08:53:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1304067 00:15:30.934 08:53:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:30.934 08:53:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:30.934 Running I/O for 10 seconds... 00:15:32.309 Latency(us) 00:15:32.309 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.309 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:32.309 Nvme0n1 : 1.00 34785.00 135.88 0.00 0.00 0.00 0.00 0.00 00:15:32.309 =================================================================================================================== 00:15:32.309 Total : 34785.00 135.88 0.00 0.00 0.00 0.00 0.00 00:15:32.309 00:15:32.876 08:53:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 671a0b94-b503-414c-8322-d96401e6b5dc 00:15:33.134 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:33.134 Nvme0n1 : 2.00 35185.50 137.44 0.00 0.00 0.00 0.00 0.00 00:15:33.134 =================================================================================================================== 00:15:33.134 Total : 35185.50 137.44 0.00 0.00 0.00 0.00 0.00 00:15:33.134 00:15:33.134 true 00:15:33.134 08:53:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 671a0b94-b503-414c-8322-d96401e6b5dc 00:15:33.134 08:53:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:33.392 08:53:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:33.392 08:53:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:33.392 08:53:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1304067 00:15:33.959 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:33.959 Nvme0n1 : 3.00 35296.67 137.88 0.00 0.00 0.00 0.00 0.00 00:15:33.959 =================================================================================================================== 00:15:33.959 Total : 35296.67 137.88 0.00 0.00 0.00 0.00 0.00 00:15:33.959 00:15:35.365 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:35.365 Nvme0n1 : 4.00 35416.00 138.34 0.00 0.00 0.00 0.00 0.00 00:15:35.365 =================================================================================================================== 00:15:35.365 Total : 35416.00 138.34 0.00 0.00 0.00 0.00 0.00 00:15:35.365 00:15:35.947 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:35.947 Nvme0n1 : 5.00 35487.80 138.62 0.00 0.00 0.00 0.00 0.00 00:15:35.947 =================================================================================================================== 00:15:35.947 Total : 35487.80 138.62 0.00 0.00 0.00 0.00 0.00 00:15:35.947 00:15:37.322 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:37.322 Nvme0n1 : 6.00 35551.67 138.87 0.00 0.00 0.00 0.00 0.00 00:15:37.322 
=================================================================================================================== 00:15:37.322 Total : 35551.67 138.87 0.00 0.00 0.00 0.00 0.00 00:15:37.322 00:15:38.260 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:38.260 Nvme0n1 : 7.00 35561.71 138.91 0.00 0.00 0.00 0.00 0.00 00:15:38.261 =================================================================================================================== 00:15:38.261 Total : 35561.71 138.91 0.00 0.00 0.00 0.00 0.00 00:15:38.261 00:15:39.199 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:39.199 Nvme0n1 : 8.00 35559.88 138.91 0.00 0.00 0.00 0.00 0.00 00:15:39.199 =================================================================================================================== 00:15:39.199 Total : 35559.88 138.91 0.00 0.00 0.00 0.00 0.00 00:15:39.199 00:15:40.133 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:40.133 Nvme0n1 : 9.00 35584.33 139.00 0.00 0.00 0.00 0.00 0.00 00:15:40.133 =================================================================================================================== 00:15:40.133 Total : 35584.33 139.00 0.00 0.00 0.00 0.00 0.00 00:15:40.133 00:15:41.067 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:41.067 Nvme0n1 : 10.00 35609.70 139.10 0.00 0.00 0.00 0.00 0.00 00:15:41.067 =================================================================================================================== 00:15:41.067 Total : 35609.70 139.10 0.00 0.00 0.00 0.00 0.00 00:15:41.067 00:15:41.067 00:15:41.067 Latency(us) 00:15:41.067 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.067 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:41.067 Nvme0n1 : 10.00 35610.76 139.10 0.00 0.00 3591.48 2293.76 18350.08 00:15:41.067 =================================================================================================================== 00:15:41.067 Total : 35610.76 139.10 0.00 0.00 3591.48 2293.76 18350.08 00:15:41.067 0 00:15:41.067 08:54:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1303832 00:15:41.067 08:54:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@949 -- # '[' -z 1303832 ']' 00:15:41.067 08:54:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # kill -0 1303832 00:15:41.067 08:54:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # uname 00:15:41.067 08:54:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:41.067 08:54:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1303832 00:15:41.067 08:54:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:15:41.067 08:54:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:15:41.068 08:54:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1303832' 00:15:41.068 killing process with pid 1303832 00:15:41.068 08:54:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # kill 1303832 00:15:41.068 Received shutdown signal, test time was about 10.000000 seconds 00:15:41.068 00:15:41.068 Latency(us) 00:15:41.068 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.068 =================================================================================================================== 00:15:41.068 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:41.068 08:54:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # wait 1303832 00:15:41.325 08:54:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:15:41.583 08:54:03 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:41.583 08:54:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 671a0b94-b503-414c-8322-d96401e6b5dc 00:15:41.583 08:54:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:41.841 08:54:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:41.841 08:54:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:15:41.841 08:54:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1300680 00:15:41.841 08:54:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1300680 00:15:41.841 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1300680 Killed "${NVMF_APP[@]}" "$@" 00:15:41.841 08:54:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:15:41.841 08:54:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:15:41.841 08:54:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:41.841 08:54:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:41.841 08:54:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:41.841 08:54:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1306005 00:15:41.841 08:54:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1306005 00:15:41.841 08:54:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # '[' -z 1306005 ']' 00:15:41.841 08:54:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.841 08:54:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:41.841 08:54:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:41.841 08:54:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:41.841 08:54:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:41.841 08:54:04 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:41.841 [2024-06-09 08:54:04.341768] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:15:41.841 [2024-06-09 08:54:04.341816] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:41.841 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.841 [2024-06-09 08:54:04.398273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.099 [2024-06-09 08:54:04.475318] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:42.099 [2024-06-09 08:54:04.475349] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:42.099 [2024-06-09 08:54:04.475356] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:42.099 [2024-06-09 08:54:04.475362] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:42.099 [2024-06-09 08:54:04.475367] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:42.099 [2024-06-09 08:54:04.475389] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.671 08:54:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:42.671 08:54:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@863 -- # return 0 00:15:42.671 08:54:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:42.671 08:54:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:42.671 08:54:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:42.671 08:54:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:42.671 08:54:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:42.928 [2024-06-09 08:54:05.327683] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:42.928 [2024-06-09 08:54:05.327772] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:42.928 [2024-06-09 08:54:05.327797] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:42.928 08:54:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:15:42.928 08:54:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 0ff94741-1203-4b6c-bb4c-dfe9fc73c340 00:15:42.928 08:54:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=0ff94741-1203-4b6c-bb4c-dfe9fc73c340 00:15:42.928 08:54:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # 
local bdev_timeout= 00:15:42.928 08:54:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:15:42.928 08:54:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:42.928 08:54:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:42.928 08:54:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:43.187 08:54:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0ff94741-1203-4b6c-bb4c-dfe9fc73c340 -t 2000 00:15:43.187 [ 00:15:43.187 { 00:15:43.187 "name": "0ff94741-1203-4b6c-bb4c-dfe9fc73c340", 00:15:43.187 "aliases": [ 00:15:43.187 "lvs/lvol" 00:15:43.187 ], 00:15:43.187 "product_name": "Logical Volume", 00:15:43.187 "block_size": 4096, 00:15:43.187 "num_blocks": 38912, 00:15:43.187 "uuid": "0ff94741-1203-4b6c-bb4c-dfe9fc73c340", 00:15:43.187 "assigned_rate_limits": { 00:15:43.187 "rw_ios_per_sec": 0, 00:15:43.187 "rw_mbytes_per_sec": 0, 00:15:43.187 "r_mbytes_per_sec": 0, 00:15:43.187 "w_mbytes_per_sec": 0 00:15:43.187 }, 00:15:43.187 "claimed": false, 00:15:43.187 "zoned": false, 00:15:43.187 "supported_io_types": { 00:15:43.187 "read": true, 00:15:43.187 "write": true, 00:15:43.187 "unmap": true, 00:15:43.187 "write_zeroes": true, 00:15:43.187 "flush": false, 00:15:43.187 "reset": true, 00:15:43.187 "compare": false, 00:15:43.187 "compare_and_write": false, 00:15:43.187 "abort": false, 00:15:43.187 "nvme_admin": false, 00:15:43.187 "nvme_io": false 00:15:43.187 }, 00:15:43.187 "driver_specific": { 00:15:43.187 "lvol": { 00:15:43.187 "lvol_store_uuid": "671a0b94-b503-414c-8322-d96401e6b5dc", 00:15:43.187 "base_bdev": "aio_bdev", 00:15:43.187 "thin_provision": false, 00:15:43.187 "num_allocated_clusters": 38, 00:15:43.187 "snapshot": false, 00:15:43.187 "clone": false, 00:15:43.187 "esnap_clone": false 00:15:43.187 } 00:15:43.187 } 00:15:43.187 } 00:15:43.187 ] 00:15:43.187 08:54:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:15:43.187 08:54:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 671a0b94-b503-414c-8322-d96401e6b5dc 00:15:43.187 08:54:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:15:43.445 08:54:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:15:43.445 08:54:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 671a0b94-b503-414c-8322-d96401e6b5dc 00:15:43.445 08:54:05 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:15:43.703 08:54:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:15:43.703 08:54:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:43.703 [2024-06-09 08:54:06.168098] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:43.703 
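Deleting the backing AIO bdev hot-removes the lvstore, so the next lookup is required to fail; the harness's NOT() wrapper inverts the exit status so that the expected error keeps the test green. In outline, again using only names from this run:

  rpc.py bdev_aio_delete aio_bdev
  if rpc.py bdev_lvol_get_lvstores -u 671a0b94-b503-414c-8322-d96401e6b5dc; then
    exit 1    # lookup unexpectedly succeeded
  fi          # expected failure: JSON-RPC error -19, "No such device", as shown below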
08:54:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 671a0b94-b503-414c-8322-d96401e6b5dc 00:15:43.703 08:54:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # local es=0 00:15:43.703 08:54:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 671a0b94-b503-414c-8322-d96401e6b5dc 00:15:43.703 08:54:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:15:43.703 08:54:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:43.703 08:54:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:15:43.703 08:54:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:43.703 08:54:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:15:43.703 08:54:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:43.703 08:54:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:15:43.703 08:54:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py ]] 00:15:43.703 08:54:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 671a0b94-b503-414c-8322-d96401e6b5dc 00:15:43.961 request: 00:15:43.961 { 00:15:43.961 "uuid": "671a0b94-b503-414c-8322-d96401e6b5dc", 00:15:43.961 "method": "bdev_lvol_get_lvstores", 00:15:43.961 "req_id": 1 00:15:43.961 } 00:15:43.961 Got JSON-RPC error response 00:15:43.961 response: 00:15:43.961 { 00:15:43.961 "code": -19, 00:15:43.961 "message": "No such device" 00:15:43.961 } 00:15:43.961 08:54:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # es=1 00:15:43.961 08:54:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:43.961 08:54:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:43.961 08:54:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:43.961 08:54:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:44.226 aio_bdev 00:15:44.226 08:54:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0ff94741-1203-4b6c-bb4c-dfe9fc73c340 00:15:44.226 08:54:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_name=0ff94741-1203-4b6c-bb4c-dfe9fc73c340 00:15:44.226 08:54:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_timeout= 00:15:44.226 08:54:06 
nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local i 00:15:44.226 08:54:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # [[ -z '' ]] 00:15:44.226 08:54:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # bdev_timeout=2000 00:15:44.226 08:54:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:44.226 08:54:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0ff94741-1203-4b6c-bb4c-dfe9fc73c340 -t 2000 00:15:44.488 [ 00:15:44.488 { 00:15:44.488 "name": "0ff94741-1203-4b6c-bb4c-dfe9fc73c340", 00:15:44.488 "aliases": [ 00:15:44.488 "lvs/lvol" 00:15:44.488 ], 00:15:44.488 "product_name": "Logical Volume", 00:15:44.488 "block_size": 4096, 00:15:44.488 "num_blocks": 38912, 00:15:44.488 "uuid": "0ff94741-1203-4b6c-bb4c-dfe9fc73c340", 00:15:44.488 "assigned_rate_limits": { 00:15:44.488 "rw_ios_per_sec": 0, 00:15:44.488 "rw_mbytes_per_sec": 0, 00:15:44.488 "r_mbytes_per_sec": 0, 00:15:44.488 "w_mbytes_per_sec": 0 00:15:44.488 }, 00:15:44.488 "claimed": false, 00:15:44.488 "zoned": false, 00:15:44.488 "supported_io_types": { 00:15:44.488 "read": true, 00:15:44.488 "write": true, 00:15:44.488 "unmap": true, 00:15:44.488 "write_zeroes": true, 00:15:44.488 "flush": false, 00:15:44.488 "reset": true, 00:15:44.488 "compare": false, 00:15:44.488 "compare_and_write": false, 00:15:44.488 "abort": false, 00:15:44.488 "nvme_admin": false, 00:15:44.488 "nvme_io": false 00:15:44.488 }, 00:15:44.488 "driver_specific": { 00:15:44.488 "lvol": { 00:15:44.488 "lvol_store_uuid": "671a0b94-b503-414c-8322-d96401e6b5dc", 00:15:44.488 "base_bdev": "aio_bdev", 00:15:44.488 "thin_provision": false, 00:15:44.488 "num_allocated_clusters": 38, 00:15:44.488 "snapshot": false, 00:15:44.488 "clone": false, 00:15:44.488 "esnap_clone": false 00:15:44.488 } 00:15:44.488 } 00:15:44.488 } 00:15:44.488 ] 00:15:44.488 08:54:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # return 0 00:15:44.488 08:54:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 671a0b94-b503-414c-8322-d96401e6b5dc 00:15:44.488 08:54:06 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:44.745 08:54:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:44.745 08:54:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 671a0b94-b503-414c-8322-d96401e6b5dc 00:15:44.745 08:54:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:44.745 08:54:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:44.745 08:54:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0ff94741-1203-4b6c-bb4c-dfe9fc73c340 00:15:45.003 08:54:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 671a0b94-b503-414c-8322-d96401e6b5dc 00:15:45.261 08:54:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:45.261 08:54:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:45.261 00:15:45.261 real 0m17.331s 00:15:45.261 user 0m45.502s 00:15:45.261 sys 0m2.776s 00:15:45.261 08:54:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:45.262 08:54:07 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:45.262 ************************************ 00:15:45.262 END TEST lvs_grow_dirty 00:15:45.262 ************************************ 00:15:45.262 08:54:07 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:45.262 08:54:07 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # type=--id 00:15:45.262 08:54:07 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # id=0 00:15:45.262 08:54:07 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:15:45.262 08:54:07 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:45.262 08:54:07 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:15:45.262 08:54:07 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:15:45.262 08:54:07 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # for n in $shm_files 00:15:45.262 08:54:07 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:45.262 nvmf_trace.0 00:15:45.520 08:54:07 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # return 0 00:15:45.520 08:54:07 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:45.520 08:54:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:45.520 08:54:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:15:45.520 08:54:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:45.520 08:54:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:45.520 08:54:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:15:45.520 08:54:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:45.520 08:54:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:45.520 rmmod nvme_rdma 00:15:45.520 rmmod nvme_fabrics 00:15:45.520 08:54:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:45.520 08:54:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:15:45.520 08:54:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:15:45.520 08:54:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1306005 ']' 00:15:45.520 08:54:07 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1306005 00:15:45.520 08:54:07 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@949 -- # '[' -z 1306005 ']' 00:15:45.520 08:54:07 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # kill -0 1306005 00:15:45.520 08:54:07 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # uname 
00:15:45.520 08:54:07 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:45.520 08:54:07 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1306005 00:15:45.520 08:54:07 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:45.520 08:54:07 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:45.520 08:54:07 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1306005' 00:15:45.520 killing process with pid 1306005 00:15:45.520 08:54:07 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # kill 1306005 00:15:45.520 08:54:07 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # wait 1306005 00:15:45.779 08:54:08 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:45.779 08:54:08 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:45.779 00:15:45.779 real 0m40.048s 00:15:45.779 user 1m6.951s 00:15:45.779 sys 0m8.311s 00:15:45.779 08:54:08 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # xtrace_disable 00:15:45.779 08:54:08 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:45.779 ************************************ 00:15:45.779 END TEST nvmf_lvs_grow 00:15:45.779 ************************************ 00:15:45.779 08:54:08 nvmf_rdma -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:15:45.779 08:54:08 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:45.779 08:54:08 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:45.779 08:54:08 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:45.779 ************************************ 00:15:45.779 START TEST nvmf_bdev_io_wait 00:15:45.779 ************************************ 00:15:45.779 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:15:45.779 * Looking for test storage... 
00:15:45.779 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:15:45.779 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:15:45.779 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:15:45.779 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:45.779 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:45.779 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:45.779 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:45.779 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:45.779 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:45.779 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:45.779 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:45.779 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:45.779 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:45.779 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:45.779 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:15:45.779 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:45.779 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:45.779 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:45.780 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:45.780 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:15:45.780 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:45.780 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:45.780 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:45.780 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.780 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.780 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.780 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:15:45.780 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:45.780 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:15:45.780 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:45.780 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:45.780 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:45.780 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:45.780 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:45.780 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:45.780 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:45.780 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:45.780 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:45.780 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:45.780 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:45.780 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:45.780 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:45.780 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:45.780 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:45.780 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:45.780 08:54:08 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:45.780 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:45.780 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:45.780 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:45.780 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:45.780 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:15:45.780 08:54:08 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:51.041 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:51.041 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:15:51.041 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:51.041 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:51.041 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:51.041 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:51.041 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:51.041 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:15:51.041 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:51.041 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:15:51.041 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:15:51.041 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:15:51.041 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:15:51.041 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:15:51.041 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:15:51.041 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:51.041 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:51.041 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:51.041 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:51.041 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:51.041 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:51.041 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:51.041 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:51.041 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:51.041 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:51.041 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:51.041 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:51.041 
08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:51.042 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:51.042 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # modinfo irdma 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 
)) 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:51.042 Found net devices under 0000:af:00.0: cvl_0_0 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:51.042 Found net devices under 0000:af:00.1: cvl_0_1 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # rdma_device_init 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # uname 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- 
nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo cvl_0_0 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo cvl_0_1 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:15:51.042 8: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:15:51.042 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:15:51.042 altname enp175s0f0np0 00:15:51.042 altname ens801f0np0 00:15:51.042 inet 192.168.100.8/24 scope global cvl_0_0 00:15:51.042 valid_lft forever preferred_lft forever 00:15:51.042 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:15:51.042 valid_lft forever preferred_lft forever 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:15:51.042 9: cvl_0_1: mtu 1500 qdisc mq state UP group 
default qlen 1000 00:15:51.042 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:15:51.042 altname enp175s0f1np1 00:15:51.042 altname ens801f1np1 00:15:51.042 inet 192.168.100.9/24 scope global cvl_0_1 00:15:51.042 valid_lft forever preferred_lft forever 00:15:51.042 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:15:51.042 valid_lft forever preferred_lft forever 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:51.042 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo cvl_0_0 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo cvl_0_1 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- 
# get_ip_address cvl_0_1 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:51.043 192.168.100.9' 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:51.043 192.168.100.9' 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # head -n 1 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:51.043 192.168.100.9' 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # head -n 1 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # tail -n +2 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1309943 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1309943 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@830 -- # '[' -z 1309943 ']' 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:51.043 08:54:13 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:51.043 [2024-06-09 08:54:13.420187] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
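common.sh derives the two target addresses from the discovered RDMA interfaces: the first line of the collected IP list becomes NVMF_FIRST_TARGET_IP and the remainder yields the second. A condensed sketch of the logic traced above, with the values observed on this node:

    RDMA_IP_LIST=$(printf '192.168.100.8\n192.168.100.9')
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
    modprobe nvme-rdma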
00:15:51.043 [2024-06-09 08:54:13.420233] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:51.043 EAL: No free 2048 kB hugepages reported on node 1 00:15:51.043 [2024-06-09 08:54:13.474804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:51.043 [2024-06-09 08:54:13.551228] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:51.043 [2024-06-09 08:54:13.551264] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:51.043 [2024-06-09 08:54:13.551271] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:51.043 [2024-06-09 08:54:13.551277] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:51.043 [2024-06-09 08:54:13.551282] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:51.043 [2024-06-09 08:54:13.551325] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.043 [2024-06-09 08:54:13.551434] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:15:51.043 [2024-06-09 08:54:13.551525] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:15:51.043 [2024-06-09 08:54:13.551526] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@863 -- # return 0 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@729 -- # xtrace_disable 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:51.977 [2024-06-09 08:54:14.337151] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x601910/0x600f50) succeed. 
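Because nvmf_tgt was started with --wait-for-rpc, the script configures bdev options and completes framework init over RPC before creating the transport; the "Create IB device ... succeed" messages above are the result of the transport call. The equivalent sequence issued directly with rpc.py (run from the spdk checkout, default RPC socket):

    # deliberately tiny bdev I/O pool and cache so bdevperf hits the io_wait path
    scripts/rpc.py bdev_set_options -p 5 -c 1
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192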
00:15:51.977 [2024-06-09 08:54:14.345781] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x602c80/0x6014d0) succeed. 00:15:51.977 [2024-06-09 08:54:14.345812] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:51.977 Malloc0 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:51.977 [2024-06-09 08:54:14.406902] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1310189 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1310191 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 
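With the transport up, the test provisions the I/O path end to end: a 64 MiB malloc bdev with 512-byte blocks, a subsystem, a namespace, and an RDMA listener on the first target IP. As standalone RPCs (same arguments as the rpc_cmd calls traced above):

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420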
00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:51.977 { 00:15:51.977 "params": { 00:15:51.977 "name": "Nvme$subsystem", 00:15:51.977 "trtype": "$TEST_TRANSPORT", 00:15:51.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:51.977 "adrfam": "ipv4", 00:15:51.977 "trsvcid": "$NVMF_PORT", 00:15:51.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:51.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:51.977 "hdgst": ${hdgst:-false}, 00:15:51.977 "ddgst": ${ddgst:-false} 00:15:51.977 }, 00:15:51.977 "method": "bdev_nvme_attach_controller" 00:15:51.977 } 00:15:51.977 EOF 00:15:51.977 )") 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1310193 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:51.977 { 00:15:51.977 "params": { 00:15:51.977 "name": "Nvme$subsystem", 00:15:51.977 "trtype": "$TEST_TRANSPORT", 00:15:51.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:51.977 "adrfam": "ipv4", 00:15:51.977 "trsvcid": "$NVMF_PORT", 00:15:51.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:51.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:51.977 "hdgst": ${hdgst:-false}, 00:15:51.977 "ddgst": ${ddgst:-false} 00:15:51.977 }, 00:15:51.977 "method": "bdev_nvme_attach_controller" 00:15:51.977 } 00:15:51.977 EOF 00:15:51.977 )") 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1310196 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:51.977 { 00:15:51.977 "params": { 00:15:51.977 "name": "Nvme$subsystem", 00:15:51.977 "trtype": "$TEST_TRANSPORT", 00:15:51.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:51.977 "adrfam": "ipv4", 00:15:51.977 "trsvcid": "$NVMF_PORT", 00:15:51.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:51.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:51.977 "hdgst": ${hdgst:-false}, 00:15:51.977 "ddgst": ${ddgst:-false} 00:15:51.977 }, 00:15:51.977 "method": "bdev_nvme_attach_controller" 00:15:51.977 } 
00:15:51.977 EOF 00:15:51.977 )") 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:51.977 { 00:15:51.977 "params": { 00:15:51.977 "name": "Nvme$subsystem", 00:15:51.977 "trtype": "$TEST_TRANSPORT", 00:15:51.977 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:51.977 "adrfam": "ipv4", 00:15:51.977 "trsvcid": "$NVMF_PORT", 00:15:51.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:51.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:51.977 "hdgst": ${hdgst:-false}, 00:15:51.977 "ddgst": ${ddgst:-false} 00:15:51.977 }, 00:15:51.977 "method": "bdev_nvme_attach_controller" 00:15:51.977 } 00:15:51.977 EOF 00:15:51.977 )") 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:51.977 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1310189 00:15:51.978 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:51.978 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:51.978 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:51.978 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:51.978 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:51.978 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:51.978 "params": { 00:15:51.978 "name": "Nvme1", 00:15:51.978 "trtype": "rdma", 00:15:51.978 "traddr": "192.168.100.8", 00:15:51.978 "adrfam": "ipv4", 00:15:51.978 "trsvcid": "4420", 00:15:51.978 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:51.978 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:51.978 "hdgst": false, 00:15:51.978 "ddgst": false 00:15:51.978 }, 00:15:51.978 "method": "bdev_nvme_attach_controller" 00:15:51.978 }' 00:15:51.978 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:51.978 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
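Each of the four bdevperf instances receives its attach configuration as JSON over an inherited file descriptor: the /dev/fd/63 in the command lines is bash process substitution around gen_nvmf_target_json, whose rendered output is printed next. The write-workload instance, for example, amounts to (run from the spdk tree):

    build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256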
00:15:51.978 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:51.978 "params": { 00:15:51.978 "name": "Nvme1", 00:15:51.978 "trtype": "rdma", 00:15:51.978 "traddr": "192.168.100.8", 00:15:51.978 "adrfam": "ipv4", 00:15:51.978 "trsvcid": "4420", 00:15:51.978 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:51.978 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:51.978 "hdgst": false, 00:15:51.978 "ddgst": false 00:15:51.978 }, 00:15:51.978 "method": "bdev_nvme_attach_controller" 00:15:51.978 }' 00:15:51.978 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:51.978 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:51.978 "params": { 00:15:51.978 "name": "Nvme1", 00:15:51.978 "trtype": "rdma", 00:15:51.978 "traddr": "192.168.100.8", 00:15:51.978 "adrfam": "ipv4", 00:15:51.978 "trsvcid": "4420", 00:15:51.978 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:51.978 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:51.978 "hdgst": false, 00:15:51.978 "ddgst": false 00:15:51.978 }, 00:15:51.978 "method": "bdev_nvme_attach_controller" 00:15:51.978 }' 00:15:51.978 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:51.978 08:54:14 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:51.978 "params": { 00:15:51.978 "name": "Nvme1", 00:15:51.978 "trtype": "rdma", 00:15:51.978 "traddr": "192.168.100.8", 00:15:51.978 "adrfam": "ipv4", 00:15:51.978 "trsvcid": "4420", 00:15:51.978 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:51.978 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:51.978 "hdgst": false, 00:15:51.978 "ddgst": false 00:15:51.978 }, 00:15:51.978 "method": "bdev_nvme_attach_controller" 00:15:51.978 }' 00:15:51.978 [2024-06-09 08:54:14.441694] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:15:51.978 [2024-06-09 08:54:14.441749] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:51.978 [2024-06-09 08:54:14.452554] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:15:51.978 [2024-06-09 08:54:14.452593] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:51.978 [2024-06-09 08:54:14.455070] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:15:51.978 [2024-06-09 08:54:14.455109] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:51.978 [2024-06-09 08:54:14.456542] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:15:51.978 [2024-06-09 08:54:14.456585] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:15:51.978 EAL: No free 2048 kB hugepages reported on node 1
00:15:52.237 EAL: No free 2048 kB hugepages reported on node 1
00:15:52.237 [2024-06-09 08:54:14.605249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:52.237 EAL: No free 2048 kB hugepages reported on node 1
00:15:52.237 [2024-06-09 08:54:14.687146] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5
00:15:52.237 [2024-06-09 08:54:14.696943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:52.237 EAL: No free 2048 kB hugepages reported on node 1
00:15:52.237 [2024-06-09 08:54:14.776236] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4
00:15:52.237 [2024-06-09 08:54:14.789969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:52.495 [2024-06-09 08:54:14.848845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:52.495 [2024-06-09 08:54:14.877498] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 7
00:15:52.495 [2024-06-09 08:54:14.926946] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6
00:15:52.495 Running I/O for 1 seconds...
00:15:52.495 Running I/O for 1 seconds...
00:15:52.495 Running I/O for 1 seconds...
00:15:52.754 Running I/O for 1 seconds...
00:15:53.689
00:15:53.689 Latency(us)
00:15:53.689 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:53.689 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:15:53.689 Nvme1n1 : 1.01 16515.36 64.51 0.00 0.00 7726.08 5398.92 23967.45
00:15:53.689 ===================================================================================================================
00:15:53.689 Total : 16515.36 64.51 0.00 0.00 7726.08 5398.92 23967.45
00:15:53.689
00:15:53.689 Latency(us)
00:15:53.689 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:53.689 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:15:53.689 Nvme1n1 : 1.01 14352.02 56.06 0.00 0.00 8889.45 5554.96 20222.54
00:15:53.689 ===================================================================================================================
00:15:53.689 Total : 14352.02 56.06 0.00 0.00 8889.45 5554.96 20222.54
00:15:53.689
00:15:53.689 Latency(us)
00:15:53.689 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:53.689 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:15:53.689 Nvme1n1 : 1.00 17585.61 68.69 0.00 0.00 7262.65 3432.84 18225.25
00:15:53.689 ===================================================================================================================
00:15:53.689 Total : 17585.61 68.69 0.00 0.00 7262.65 3432.84 18225.25
00:15:53.689
00:15:53.689 Latency(us)
00:15:53.689 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:53.689 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:15:53.689 Nvme1n1 : 1.00 255450.26 997.85 0.00 0.00 499.38 210.65 1950.48
00:15:53.689 ===================================================================================================================
00:15:53.689 Total : 255450.26 997.85 0.00 0.00 499.38 210.65 1950.48
00:15:53.689 08:54:16 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- #
wait 1310191 00:15:53.948 08:54:16 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1310193 00:15:53.948 08:54:16 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1310196 00:15:53.948 08:54:16 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:53.948 08:54:16 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:53.948 08:54:16 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:53.948 08:54:16 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:53.948 08:54:16 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:53.948 08:54:16 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:53.948 08:54:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:53.948 08:54:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:15:53.948 08:54:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:15:53.948 08:54:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:15:53.948 08:54:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:15:53.948 08:54:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:53.948 08:54:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:15:53.948 rmmod nvme_rdma 00:15:53.948 rmmod nvme_fabrics 00:15:53.948 08:54:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:53.948 08:54:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:15:53.948 08:54:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:15:53.948 08:54:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1309943 ']' 00:15:53.948 08:54:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1309943 00:15:53.948 08:54:16 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@949 -- # '[' -z 1309943 ']' 00:15:53.948 08:54:16 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # kill -0 1309943 00:15:53.948 08:54:16 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # uname 00:15:53.948 08:54:16 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:15:53.948 08:54:16 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1309943 00:15:54.207 08:54:16 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:15:54.207 08:54:16 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:15:54.207 08:54:16 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1309943' 00:15:54.207 killing process with pid 1309943 00:15:54.207 08:54:16 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # kill 1309943 00:15:54.207 08:54:16 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # wait 1309943 00:15:54.207 08:54:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:54.208 08:54:16 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:15:54.208 00:15:54.208 real 0m8.541s 00:15:54.208 user 0m19.928s 00:15:54.208 sys 0m4.937s 00:15:54.208 08:54:16 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # 
xtrace_disable 00:15:54.208 08:54:16 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:54.208 ************************************ 00:15:54.208 END TEST nvmf_bdev_io_wait 00:15:54.208 ************************************ 00:15:54.208 08:54:16 nvmf_rdma -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:15:54.208 08:54:16 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:15:54.208 08:54:16 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:15:54.208 08:54:16 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:15:54.466 ************************************ 00:15:54.466 START TEST nvmf_queue_depth 00:15:54.466 ************************************ 00:15:54.466 08:54:16 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:15:54.466 * Looking for test storage... 00:15:54.466 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:15:54.466 08:54:16 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:15:54.466 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:54.466 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:54.466 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:54.466 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:54.466 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:54.466 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:54.466 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:54.466 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:54.466 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:54.466 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.466 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:54.466 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:15:54.466 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:15:54.466 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:54.466 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:54.466 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:54.466 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:54.466 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:15:54.466 08:54:16 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:54.466 08:54:16 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.466 08:54:16 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.466 08:54:16 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.466 08:54:16 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.466 08:54:16 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.466 08:54:16 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:54.466 08:54:16 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.466 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:54.466 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:54.466 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:54.466 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:54.467 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:54.467 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.467 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:54.467 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:54.467 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:54.467 08:54:16 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:54.467 08:54:16 nvmf_rdma.nvmf_queue_depth -- 
target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:54.467 08:54:16 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:54.467 08:54:16 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:54.467 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:15:54.467 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:54.467 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:54.467 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:54.467 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:54.467 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.467 08:54:16 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:54.467 08:54:16 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.467 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:54.467 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:54.467 08:54:16 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:15:54.467 08:54:16 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:59.736 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:59.737 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:59.737 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@373 -- # [[ -e 
/sys/module/irdma/parameters/roce_ena ]] 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@377 -- # modinfo irdma 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:59.737 Found net devices under 0000:af:00.0: cvl_0_0 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:59.737 Found net devices under 0000:af:00.1: cvl_0_1 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@420 -- # rdma_device_init 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@58 -- # uname 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@62 -- # modprobe ib_cm 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@63 -- # modprobe ib_core 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@64 -- # modprobe ib_umad 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe iw_cm 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@502 -- # allocate_nic_ips 00:15:59.737 
08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # get_rdma_if_list 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:59.737 08:54:21 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:59.737 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:59.737 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:59.737 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:59.737 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:15:59.737 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo cvl_0_0 00:15:59.737 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:15:59.737 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:59.737 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:59.737 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:15:59.737 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:59.737 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:15:59.737 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo cvl_0_1 00:15:59.737 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:15:59.737 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:59.737 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:15:59.737 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:15:59.737 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:15:59.737 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:59.737 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:59.737 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:15:59.737 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:15:59.737 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:15:59.737 8: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:15:59.737 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:15:59.737 altname enp175s0f0np0 00:15:59.737 altname ens801f0np0 00:15:59.737 inet 192.168.100.8/24 scope global cvl_0_0 00:15:59.738 valid_lft forever preferred_lft forever 00:15:59.738 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:15:59.738 valid_lft forever preferred_lft forever 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth 
-- nvmf/common.sh@112 -- # interface=cvl_0_1 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:15:59.738 9: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:15:59.738 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:15:59.738 altname enp175s0f1np1 00:15:59.738 altname ens801f1np1 00:15:59.738 inet 192.168.100.9/24 scope global cvl_0_1 00:15:59.738 valid_lft forever preferred_lft forever 00:15:59.738 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:15:59.738 valid_lft forever preferred_lft forever 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # get_rdma_if_list 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo cvl_0_0 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo cvl_0_1 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:59.738 
08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:15:59.738 192.168.100.9' 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:15:59.738 192.168.100.9' 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # head -n 1 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:15:59.738 192.168.100.9' 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # tail -n +2 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # head -n 1 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@723 -- # xtrace_disable 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1313666 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1313666 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 1313666 ']' 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:15:59.738 08:54:22 
nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:59.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:15:59.738 08:54:22 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:59.738 [2024-06-09 08:54:22.191315] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:15:59.738 [2024-06-09 08:54:22.191359] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:59.738 EAL: No free 2048 kB hugepages reported on node 1 00:15:59.738 [2024-06-09 08:54:22.245459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.996 [2024-06-09 08:54:22.321763] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:59.996 [2024-06-09 08:54:22.321797] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:59.996 [2024-06-09 08:54:22.321803] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:59.996 [2024-06-09 08:54:22.321812] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:59.996 [2024-06-09 08:54:22.321816] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:59.996 [2024-06-09 08:54:22.321857] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.564 08:54:22 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:00.564 08:54:22 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:16:00.564 08:54:22 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:00.564 08:54:22 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:00.564 08:54:22 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:00.564 08:54:23 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:00.564 08:54:23 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:00.564 08:54:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:00.564 08:54:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:00.564 [2024-06-09 08:54:23.043310] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x1e4caf0/0x1e4c130) succeed. 00:16:00.564 [2024-06-09 08:54:23.051662] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1e4dda0/0x1e4c6b0) succeed. 00:16:00.564 [2024-06-09 08:54:23.051683] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:16:00.564 08:54:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:00.564 08:54:23 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:00.564 08:54:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:00.564 08:54:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:00.564 Malloc0 00:16:00.564 08:54:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:00.564 08:54:23 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:00.564 08:54:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:00.564 08:54:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:00.564 08:54:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:00.564 08:54:23 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:00.564 08:54:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:00.564 08:54:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:00.564 08:54:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:00.564 08:54:23 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:00.564 08:54:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:00.564 08:54:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:00.564 [2024-06-09 08:54:23.101321] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:00.564 08:54:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:00.564 08:54:23 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1313730 00:16:00.564 08:54:23 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:16:00.564 08:54:23 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:00.564 08:54:23 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1313730 /var/tmp/bdevperf.sock 00:16:00.564 08:54:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@830 -- # '[' -z 1313730 ']' 00:16:00.564 08:54:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:00.564 08:54:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:00.564 08:54:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:00.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
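[Annotation] The rpc_cmd calls traced above map one-to-one onto SPDK's scripts/rpc.py, so the same target state can be rebuilt by hand. A minimal sketch, assuming nvmf_tgt is already up and listening on its default socket /var/tmp/spdk.sock; every argument below is taken verbatim from this log:

    # Rebuild the queue_depth target configuration against a running nvmf_tgt.
    rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192   # RDMA transport, 8 KiB IO unit
    $rpc bdev_malloc_create 64 512 -b Malloc0                              # 64 MiB ramdisk, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
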
00:16:00.564 08:54:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:00.564 08:54:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:00.823 [2024-06-09 08:54:23.147716] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:16:00.823 [2024-06-09 08:54:23.147764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1313730 ] 00:16:00.823 EAL: No free 2048 kB hugepages reported on node 1 00:16:00.823 [2024-06-09 08:54:23.201049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.823 [2024-06-09 08:54:23.280103] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.410 08:54:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:01.410 08:54:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@863 -- # return 0 00:16:01.410 08:54:23 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:01.410 08:54:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:01.410 08:54:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:01.668 NVMe0n1 00:16:01.668 08:54:24 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:01.668 08:54:24 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:01.668 Running I/O for 10 seconds... 
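[Annotation] The -z flag makes bdevperf start idle and wait on its private RPC socket, which is why the harness can attach the NVMe-oF controller first and only then kick off I/O. Condensed, the three steps just traced look like this (paths as used in this log; bdevperf is backgrounded so the two RPC clients can reach it):

    /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf \
        -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests
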
00:16:11.645 00:16:11.645 Latency(us) 00:16:11.645 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:11.645 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:16:11.645 Verification LBA range: start 0x0 length 0x4000 00:16:11.645 NVMe0n1 : 10.04 17343.03 67.75 0.00 0.00 58900.16 22594.32 37199.48 00:16:11.645 =================================================================================================================== 00:16:11.645 Total : 17343.03 67.75 0.00 0.00 58900.16 22594.32 37199.48 00:16:11.645 0 00:16:11.903 08:54:34 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1313730 00:16:11.903 08:54:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 1313730 ']' 00:16:11.903 08:54:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 1313730 00:16:11.903 08:54:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:16:11.903 08:54:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:11.903 08:54:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1313730 00:16:11.903 08:54:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:11.903 08:54:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:11.903 08:54:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1313730' 00:16:11.903 killing process with pid 1313730 00:16:11.903 08:54:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 1313730 00:16:11.903 Received shutdown signal, test time was about 10.000000 seconds 00:16:11.903 00:16:11.903 Latency(us) 00:16:11.903 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:11.903 =================================================================================================================== 00:16:11.903 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:11.903 08:54:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 1313730 00:16:11.903 08:54:34 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:11.903 08:54:34 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:16:11.903 08:54:34 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:11.903 08:54:34 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:16:11.903 08:54:34 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:11.903 08:54:34 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:11.903 08:54:34 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:16:11.903 08:54:34 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:11.903 08:54:34 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:11.903 rmmod nvme_rdma 00:16:11.903 rmmod nvme_fabrics 00:16:12.162 08:54:34 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:12.162 08:54:34 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:16:12.162 08:54:34 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:16:12.162 08:54:34 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1313666 ']' 00:16:12.162 08:54:34 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1313666 
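[Annotation] The verification numbers above are internally consistent: at queue depth 1024 with 4 KiB I/Os, Little's law predicts an average latency of 1024 / 17343.03 IOPS ≈ 59.0 ms, in line with the reported 58900.16 us, and 17343.03 IOPS x 4096 B matches the reported 67.75 MiB/s. The all-zero table printed after the shutdown signal appears to be bdevperf's final summary of an already-drained job list, not a failure. A quick sanity check:

    awk 'BEGIN {
        iops = 17343.03; qd = 1024; iosz = 4096
        printf "expected avg latency: %.0f us\n", qd / iops * 1e6   # Little's law: qd / IOPS
        printf "throughput: %.2f MiB/s\n", iops * iosz / 2^20
    }'
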
00:16:12.162 08:54:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@949 -- # '[' -z 1313666 ']' 00:16:12.162 08:54:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@953 -- # kill -0 1313666 00:16:12.162 08:54:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # uname 00:16:12.162 08:54:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:12.162 08:54:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1313666 00:16:12.162 08:54:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:16:12.162 08:54:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:16:12.162 08:54:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1313666' 00:16:12.162 killing process with pid 1313666 00:16:12.162 08:54:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@968 -- # kill 1313666 00:16:12.162 08:54:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@973 -- # wait 1313666 00:16:12.435 08:54:34 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:12.435 08:54:34 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:12.435 00:16:12.435 real 0m17.970s 00:16:12.435 user 0m25.745s 00:16:12.435 sys 0m4.535s 00:16:12.435 08:54:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:12.435 08:54:34 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:12.435 ************************************ 00:16:12.435 END TEST nvmf_queue_depth 00:16:12.435 ************************************ 00:16:12.435 08:54:34 nvmf_rdma -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:16:12.435 08:54:34 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:12.435 08:54:34 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:12.435 08:54:34 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:16:12.435 ************************************ 00:16:12.435 START TEST nvmf_target_multipath 00:16:12.435 ************************************ 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:16:12.435 * Looking for test storage... 
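[Annotation] Each suite runs under the run_test wrapper, which produces the START/END banners and the real/user/sys timing seen above. Roughly, its shape is the following; this is a sketch inferred from the log output, not the actual autotest_common.sh source:

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # e.g. .../test/nvmf/target/multipath.sh --transport=rdma
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }
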
00:16:12.435 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:12.435 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.436 08:54:34 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.436 08:54:34 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.436 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:12.436 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:12.436 08:54:34 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:16:12.436 08:54:34 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:17.708 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:17.708 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:16:17.708 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:17.708 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:17.708 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:17.708 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:17.708 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:17.708 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:16:17.708 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:17.708 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:16:17.708 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:16:17.708 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:16:17.708 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:16:17.708 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:16:17.708 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:16:17.708 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:17.708 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:17.708 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:17.708 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:17.708 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:17.708 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:17.708 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:17.708 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:17.708 08:54:40 
nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:17.708 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:17.708 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:17.708 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:17.708 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:17.708 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:17.708 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:17.708 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:17.708 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:17.709 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:17.709 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:16:17.709 08:54:40 
nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@377 -- # modinfo irdma 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:17.709 Found net devices under 0000:af:00.0: cvl_0_0 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:17.709 Found net devices under 0000:af:00.1: cvl_0_1 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@420 -- # rdma_device_init 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@58 -- # uname 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@502 -- # allocate_nic_ips 
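[Annotation] With the RDMA core modules loaded, allocate_nic_ips walks the RDMA-capable netdevs and reads each one's IPv4 address; as the xtrace below shows, the helper reduces to a one-line ip/awk/cut pipeline:

    get_ip_address() {
        local interface=$1
        # Field 4 of `ip -o -4 addr show` is ADDR/PREFIX; drop the prefix length.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address cvl_0_0    # -> 192.168.100.8 on this host
    get_ip_address cvl_0_1    # -> 192.168.100.9
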
00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo cvl_0_0 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo cvl_0_1 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:16:17.709 8: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:16:17.709 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:16:17.709 altname enp175s0f0np0 00:16:17.709 altname ens801f0np0 00:16:17.709 inet 192.168.100.8/24 scope global cvl_0_0 00:16:17.709 valid_lft forever preferred_lft forever 00:16:17.709 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:16:17.709 valid_lft forever preferred_lft forever 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:16:17.709 9: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:16:17.709 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:16:17.709 altname enp175s0f1np1 00:16:17.709 altname ens801f1np1 00:16:17.709 inet 192.168.100.9/24 scope global cvl_0_1 00:16:17.709 valid_lft forever preferred_lft forever 00:16:17.709 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:16:17.709 valid_lft forever preferred_lft forever 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:17.709 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo cvl_0_0 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ cvl_0_1 == 
\c\v\l\_\0\_\1 ]] 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo cvl_0_1 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:17.710 192.168.100.9' 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:17.710 192.168.100.9' 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # head -n 1 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:17.710 192.168.100.9' 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # tail -n +2 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # head -n 1 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:17.710 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:17.970 08:54:40 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:16:17.970 08:54:40 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:16:17.970 08:54:40 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:16:17.970 run this test only with TCP transport for now 00:16:17.970 08:54:40 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:16:17.970 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:17.970 08:54:40 nvmf_rdma.nvmf_target_multipath -- 
nvmf/common.sh@117 -- # sync 00:16:17.970 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:17.970 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:17.970 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:17.970 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:17.970 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:17.970 rmmod nvme_rdma 00:16:17.970 rmmod nvme_fabrics 00:16:17.970 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:17.970 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:17.970 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:17.970 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:17.970 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:17.970 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:17.970 08:54:40 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:16:17.970 08:54:40 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:16:17.970 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:17.970 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:17.970 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:17.970 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:17.970 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:17.970 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:17.970 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:17.970 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:17.970 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:17.970 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:17.970 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:17.970 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:17.970 08:54:40 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:17.970 00:16:17.970 real 0m5.495s 00:16:17.970 user 0m1.582s 00:16:17.970 sys 0m4.041s 00:16:17.970 08:54:40 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:17.970 08:54:40 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:17.970 ************************************ 00:16:17.970 END TEST nvmf_target_multipath 00:16:17.970 ************************************ 00:16:17.970 08:54:40 nvmf_rdma -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:16:17.970 08:54:40 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:17.970 08:54:40 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:17.970 08:54:40 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:16:17.970 ************************************ 00:16:17.970 START TEST 
nvmf_zcopy 00:16:17.970 ************************************ 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:16:17.971 * Looking for test storage... 00:16:17.971 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # 
[[ phy != virt ]] 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:16:17.971 08:54:40 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@335 -- # 
(( 2 == 0 )) 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:23.246 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:23.246 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@377 -- # modinfo irdma 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:23.246 Found net devices under 0000:af:00.0: cvl_0_0 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:23.246 Found net devices under 0000:af:00.1: cvl_0_1 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@420 -- # rdma_device_init 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@58 -- # uname 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo cvl_0_0 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:16:23.246 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:16:23.247 08:54:45 
nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo cvl_0_1 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:16:23.247 8: cvl_0_0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:16:23.247 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:16:23.247 altname enp175s0f0np0 00:16:23.247 altname ens801f0np0 00:16:23.247 inet 192.168.100.8/24 scope global cvl_0_0 00:16:23.247 valid_lft forever preferred_lft forever 00:16:23.247 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:16:23.247 valid_lft forever preferred_lft forever 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:16:23.247 9: cvl_0_1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:16:23.247 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:16:23.247 altname enp175s0f1np1 00:16:23.247 altname ens801f1np1 00:16:23.247 inet 192.168.100.9/24 scope global cvl_0_1 00:16:23.247 valid_lft forever preferred_lft forever 00:16:23.247 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:16:23.247 valid_lft forever preferred_lft forever 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- 
nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo cvl_0_0 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo cvl_0_1 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:23.247 192.168.100.9' 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:23.247 192.168.100.9' 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # head -n 1 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:23.247 192.168.100.9' 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # tail -n +2 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # head -n 1 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- 
nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1321810 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1321810 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@830 -- # '[' -z 1321810 ']' 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:23.247 08:54:45 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:23.247 [2024-06-09 08:54:45.727467] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:16:23.247 [2024-06-09 08:54:45.727515] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.247 EAL: No free 2048 kB hugepages reported on node 1 00:16:23.247 [2024-06-09 08:54:45.782226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.507 [2024-06-09 08:54:45.854873] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:23.507 [2024-06-09 08:54:45.854911] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:23.507 [2024-06-09 08:54:45.854918] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:23.507 [2024-06-09 08:54:45.854924] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:23.507 [2024-06-09 08:54:45.854928] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
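
At this point nvmfappstart (target/zcopy.sh@13, nvmf/common.sh@480-482) has forked nvmf_tgt into the background, recorded nvmfpid=1321810, and blocks in waitforlisten until the application's RPC socket answers. A sketch of that launch-and-poll pattern; the rpc_get_methods probe and the 0.5 s interval are illustrative assumptions, since the trace shows only the pid capture and the waiting message:

# Launch the target with the flags seen in the trace, then poll its RPC socket.
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
rpc_addr=/var/tmp/spdk.sock
echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
while ! /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || exit 1   # give up if the target died before listening
    sleep 0.5
done
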
00:16:23.507 [2024-06-09 08:54:45.854946] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.075 08:54:46 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:24.075 08:54:46 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@863 -- # return 0 00:16:24.075 08:54:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:24.075 08:54:46 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:24.075 08:54:46 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:24.075 08:54:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.075 08:54:46 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:16:24.075 08:54:46 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:16:24.075 Unsupported transport: rdma 00:16:24.075 08:54:46 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:16:24.075 08:54:46 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:16:24.075 08:54:46 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@807 -- # type=--id 00:16:24.075 08:54:46 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@808 -- # id=0 00:16:24.075 08:54:46 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@809 -- # '[' --id = --pid ']' 00:16:24.075 08:54:46 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:24.075 08:54:46 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0 00:16:24.075 08:54:46 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]] 00:16:24.075 08:54:46 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@819 -- # for n in $shm_files 00:16:24.075 08:54:46 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:24.075 nvmf_trace.0 00:16:24.075 08:54:46 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@822 -- # return 0 00:16:24.075 08:54:46 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:16:24.075 08:54:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:24.075 08:54:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:16:24.075 08:54:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:24.075 08:54:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:24.075 08:54:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:16:24.075 08:54:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:24.075 08:54:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:24.075 rmmod nvme_rdma 00:16:24.075 rmmod nvme_fabrics 00:16:24.334 08:54:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:24.334 08:54:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:16:24.334 08:54:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:16:24.334 08:54:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1321810 ']' 00:16:24.334 08:54:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1321810 00:16:24.334 08:54:46 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@949 -- # '[' -z 1321810 ']' 00:16:24.334 08:54:46 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@953 -- # kill -0 1321810 00:16:24.334 08:54:46 
nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@954 -- # uname 00:16:24.334 08:54:46 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:24.334 08:54:46 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1321810 00:16:24.334 08:54:46 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:16:24.334 08:54:46 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:16:24.334 08:54:46 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1321810' 00:16:24.334 killing process with pid 1321810 00:16:24.334 08:54:46 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@968 -- # kill 1321810 00:16:24.334 08:54:46 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@973 -- # wait 1321810 00:16:24.334 08:54:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:24.334 08:54:46 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:24.334 00:16:24.334 real 0m6.469s 00:16:24.334 user 0m2.907s 00:16:24.334 sys 0m4.161s 00:16:24.334 08:54:46 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:24.334 08:54:46 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:24.334 ************************************ 00:16:24.334 END TEST nvmf_zcopy 00:16:24.334 ************************************ 00:16:24.594 08:54:46 nvmf_rdma -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:16:24.594 08:54:46 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:24.594 08:54:46 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:24.594 08:54:46 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:16:24.594 ************************************ 00:16:24.594 START TEST nvmf_nmic 00:16:24.594 ************************************ 00:16:24.594 08:54:46 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:16:24.594 * Looking for test storage... 
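
Note that nvmf_zcopy above never reached any I/O: target/zcopy.sh@15-17 skips the whole test on non-TCP transports, and the trap installed at nvmf/common.sh@484 still fires on exit, tarring /dev/shm/nvmf_trace.0 and unloading the nvme modules, which is exactly the teardown visible in the trace. A sketch of that guard-plus-trap shape; the TEST_TRANSPORT variable name is assumed, as the trace only shows its expanded value, rdma:

# process_shm and nvmftestfini are the helper functions traced above;
# the trap line is copied from the trace at nvmf/common.sh@484.
trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT

if [ "$TEST_TRANSPORT" != tcp ]; then
    echo "Unsupported transport: $TEST_TRANSPORT"
    exit 0   # status 0: the test is skipped, not failed, so the suite continues
fi
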
00:16:24.594 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
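
Each test re-sources nvmf/common.sh, which rederives the host identity and re-sources paths/export.sh; the repeated PATH prepends in export.sh are why the PATH values above keep accumulating duplicate entries from one test to the next. The identity setup traced at common.sh@17-19, sketched here with an assumed parameter expansion standing in for whatever the script does internally:

NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the UUID portion after the last colon
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
# common.sh@362 later rewrites NVME_CONNECT to 'nvme connect -i 15' for these NICs.
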
00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:16:24.594 08:54:47 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:29.869 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:29.869 08:54:52 
nvmf_rdma.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:29.869 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@377 -- # modinfo irdma 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:29.869 Found net devices under 0000:af:00.0: cvl_0_0 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:29.869 Found net devices under 0000:af:00.1: cvl_0_1 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
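
The discovery loop traced above (nvmf/common.sh@382-401) maps each matched E810 PCI function to its kernel netdev through sysfs, after reloading irdma with RoCE enabled at common.sh@377. A condensed, self-contained sketch of that loop, with the two PCI addresses from this run hard-coded for illustration:

modprobe irdma roce_ena=1            # run RoCEv2 instead of the irdma iWARP default
pci_devs=(0000:af:00.0 0000:af:00.1) # the two ice ports found above
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
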
00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@420 -- # rdma_device_init 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@58 -- # uname 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo cvl_0_0 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo cvl_0_1 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:16:29.869 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- 
nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:16:29.870 8: cvl_0_0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:16:29.870 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:16:29.870 altname enp175s0f0np0 00:16:29.870 altname ens801f0np0 00:16:29.870 inet 192.168.100.8/24 scope global cvl_0_0 00:16:29.870 valid_lft forever preferred_lft forever 00:16:29.870 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:16:29.870 valid_lft forever preferred_lft forever 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:16:29.870 9: cvl_0_1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:16:29.870 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:16:29.870 altname enp175s0f1np1 00:16:29.870 altname ens801f1np1 00:16:29.870 inet 192.168.100.9/24 scope global cvl_0_1 00:16:29.870 valid_lft forever preferred_lft forever 00:16:29.870 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:16:29.870 valid_lft forever preferred_lft forever 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:16:29.870 
08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo cvl_0_0 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo cvl_0_1 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:29.870 192.168.100.9' 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # head -n 1 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:29.870 192.168.100.9' 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:29.870 192.168.100.9' 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # tail -n +2 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # head -n 1 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1324912 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1324912 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@830 -- # '[' -z 1324912 ']' 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:29.870 08:54:52 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:29.870 [2024-06-09 08:54:52.374324] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:16:29.870 [2024-06-09 08:54:52.374368] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.870 EAL: No free 2048 kB hugepages reported on node 1 00:16:30.129 [2024-06-09 08:54:52.430683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:30.129 [2024-06-09 08:54:52.512175] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:30.129 [2024-06-09 08:54:52.512223] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:30.129 [2024-06-09 08:54:52.512230] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:30.129 [2024-06-09 08:54:52.512236] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:30.129 [2024-06-09 08:54:52.512241] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:30.129 [2024-06-09 08:54:52.512292] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.129 [2024-06-09 08:54:52.512387] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:16:30.129 [2024-06-09 08:54:52.512474] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:16:30.129 [2024-06-09 08:54:52.512475] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.695 08:54:53 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:30.695 08:54:53 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@863 -- # return 0 00:16:30.695 08:54:53 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:30.695 08:54:53 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:30.695 08:54:53 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:30.695 08:54:53 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:30.695 08:54:53 nvmf_rdma.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:30.695 08:54:53 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:30.695 08:54:53 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:30.695 [2024-06-09 08:54:53.237253] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x19f18f0/0x19f0f30) succeed. 00:16:30.695 [2024-06-09 08:54:53.246095] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x19f2ca0/0x19f14b0) succeed. 00:16:30.695 [2024-06-09 08:54:53.246122] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:16:30.695 08:54:53 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:30.695 08:54:53 nvmf_rdma.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:30.695 08:54:53 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:30.695 08:54:53 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:30.952 Malloc0 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:30.952 [2024-06-09 08:54:53.301151] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:30.952 test case1: single bdev can't be used in multiple subsystems 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:30.952 [2024-06-09 08:54:53.325186] bdev.c:8035:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 
00:16:30.952 [2024-06-09 08:54:53.325203] subsystem.c:2066:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:30.952 [2024-06-09 08:54:53.325210] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:30.952 request: 00:16:30.952 { 00:16:30.952 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:30.952 "namespace": { 00:16:30.952 "bdev_name": "Malloc0", 00:16:30.952 "no_auto_visible": false 00:16:30.952 }, 00:16:30.952 "method": "nvmf_subsystem_add_ns", 00:16:30.952 "req_id": 1 00:16:30.952 } 00:16:30.952 Got JSON-RPC error response 00:16:30.952 response: 00:16:30.952 { 00:16:30.952 "code": -32602, 00:16:30.952 "message": "Invalid parameters" 00:16:30.952 } 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:30.952 Adding namespace failed - expected result. 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:30.952 test case2: host connect to nvmf target in multiple paths 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:30.952 [2024-06-09 08:54:53.337251] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:30.952 08:54:53 nvmf_rdma.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:31.209 08:54:53 nvmf_rdma.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:16:31.466 08:54:53 nvmf_rdma.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:31.466 08:54:53 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1197 -- # local i=0 00:16:31.466 08:54:53 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:16:31.466 08:54:53 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:16:31.466 08:54:53 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1204 -- # sleep 2 00:16:33.366 08:54:55 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:16:33.366 08:54:55 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:33.366 08:54:55 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:16:33.366 08:54:55 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:16:33.366 08:54:55 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 
00:16:33.366 08:54:55 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1207 -- # return 0 00:16:33.366 08:54:55 nvmf_rdma.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:33.366 [global] 00:16:33.366 thread=1 00:16:33.366 invalidate=1 00:16:33.366 rw=write 00:16:33.366 time_based=1 00:16:33.366 runtime=1 00:16:33.366 ioengine=libaio 00:16:33.366 direct=1 00:16:33.366 bs=4096 00:16:33.366 iodepth=1 00:16:33.366 norandommap=0 00:16:33.366 numjobs=1 00:16:33.366 00:16:33.366 verify_dump=1 00:16:33.366 verify_backlog=512 00:16:33.366 verify_state_save=0 00:16:33.366 do_verify=1 00:16:33.366 verify=crc32c-intel 00:16:33.367 [job0] 00:16:33.367 filename=/dev/nvme0n1 00:16:33.367 Could not set queue depth (nvme0n1) 00:16:33.625 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:33.625 fio-3.35 00:16:33.625 Starting 1 thread 00:16:34.999 00:16:34.999 job0: (groupid=0, jobs=1): err= 0: pid=1325712: Sun Jun 9 08:54:57 2024 00:16:34.999 read: IOPS=6649, BW=26.0MiB/s (27.2MB/s)(26.0MiB/1001msec) 00:16:34.999 slat (nsec): min=5706, max=30617, avg=7413.01, stdev=1098.06 00:16:34.999 clat (usec): min=47, max=102, avg=63.30, stdev= 3.71 00:16:34.999 lat (usec): min=61, max=109, avg=70.72, stdev= 3.87 00:16:34.999 clat percentiles (usec): 00:16:34.999 | 1.00th=[ 57], 5.00th=[ 58], 10.00th=[ 59], 20.00th=[ 61], 00:16:35.000 | 30.00th=[ 62], 40.00th=[ 63], 50.00th=[ 64], 60.00th=[ 65], 00:16:35.000 | 70.00th=[ 66], 80.00th=[ 67], 90.00th=[ 69], 95.00th=[ 70], 00:16:35.000 | 99.00th=[ 74], 99.50th=[ 75], 99.90th=[ 80], 99.95th=[ 82], 00:16:35.000 | 99.99th=[ 102] 00:16:35.000 write: IOPS=7022, BW=27.4MiB/s (28.8MB/s)(27.5MiB/1001msec); 0 zone resets 00:16:35.000 slat (nsec): min=8367, max=38339, avg=9534.45, stdev=1431.62 00:16:35.000 clat (nsec): min=46026, max=88616, avg=61906.33, stdev=3681.59 00:16:35.000 lat (usec): min=62, max=126, avg=71.44, stdev= 4.07 00:16:35.000 clat percentiles (nsec): 00:16:35.000 | 1.00th=[55040], 5.00th=[56576], 10.00th=[57600], 20.00th=[58624], 00:16:35.000 | 30.00th=[59648], 40.00th=[60672], 50.00th=[61696], 60.00th=[62720], 00:16:35.000 | 70.00th=[63744], 80.00th=[64768], 90.00th=[67072], 95.00th=[68096], 00:16:35.000 | 99.00th=[72192], 99.50th=[73216], 99.90th=[78336], 99.95th=[83456], 00:16:35.000 | 99.99th=[88576] 00:16:35.000 bw ( KiB/s): min=28672, max=28672, per=100.00%, avg=28672.00, stdev= 0.00, samples=1 00:16:35.000 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=1 00:16:35.000 lat (usec) : 50=0.03%, 100=99.96%, 250=0.01% 00:16:35.000 cpu : usr=8.50%, sys=14.50%, ctx=13687, majf=0, minf=2 00:16:35.000 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:35.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.000 issued rwts: total=6656,7030,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.000 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:35.000 00:16:35.000 Run status group 0 (all jobs): 00:16:35.000 READ: bw=26.0MiB/s (27.2MB/s), 26.0MiB/s-26.0MiB/s (27.2MB/s-27.2MB/s), io=26.0MiB (27.3MB), run=1001-1001msec 00:16:35.000 WRITE: bw=27.4MiB/s (28.8MB/s), 27.4MiB/s-27.4MiB/s (28.8MB/s-28.8MB/s), io=27.5MiB (28.8MB), run=1001-1001msec 00:16:35.000 00:16:35.000 Disk stats (read/write): 00:16:35.000 nvme0n1: ios=6194/6144, merge=0/0, 
ticks=362/341, in_queue=703, util=90.88% 00:16:35.000 08:54:57 nvmf_rdma.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:36.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:36.903 08:54:59 nvmf_rdma.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:36.903 08:54:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1218 -- # local i=0 00:16:36.903 08:54:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:16:36.903 08:54:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:36.903 08:54:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:16:36.903 08:54:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:36.903 08:54:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1230 -- # return 0 00:16:36.903 08:54:59 nvmf_rdma.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:36.903 08:54:59 nvmf_rdma.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:16:36.903 08:54:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:36.903 08:54:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:16:36.904 08:54:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:16:36.904 08:54:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:16:36.904 08:54:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:16:36.904 08:54:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:36.904 08:54:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:16:36.904 rmmod nvme_rdma 00:16:36.904 rmmod nvme_fabrics 00:16:36.904 08:54:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:36.904 08:54:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:16:36.904 08:54:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:16:36.904 08:54:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1324912 ']' 00:16:36.904 08:54:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1324912 00:16:36.904 08:54:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@949 -- # '[' -z 1324912 ']' 00:16:36.904 08:54:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@953 -- # kill -0 1324912 00:16:36.904 08:54:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@954 -- # uname 00:16:36.904 08:54:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:16:36.904 08:54:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1324912 00:16:36.904 08:54:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:16:36.904 08:54:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:16:36.904 08:54:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1324912' 00:16:36.904 killing process with pid 1324912 00:16:36.904 08:54:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@968 -- # kill 1324912 00:16:36.904 08:54:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@973 -- # wait 1324912 00:16:36.904 08:54:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:36.904 08:54:59 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:16:36.904 00:16:36.904 real 0m12.425s 00:16:36.904 user 0m34.629s 00:16:36.904 sys 0m4.743s 00:16:36.904 
08:54:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1125 -- # xtrace_disable 00:16:36.904 08:54:59 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:36.904 ************************************ 00:16:36.904 END TEST nvmf_nmic 00:16:36.904 ************************************ 00:16:36.904 08:54:59 nvmf_rdma -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:16:36.904 08:54:59 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:16:36.904 08:54:59 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:16:36.904 08:54:59 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:16:36.904 ************************************ 00:16:36.904 START TEST nvmf_fio_target 00:16:36.904 ************************************ 00:16:36.904 08:54:59 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:16:37.163 * Looking for test storage... 00:16:37.163 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:16:37.163 08:54:59 nvmf_rdma.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:16:37.163 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:16:37.163 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:37.163 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:37.163 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:37.163 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:37.163 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:37.163 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:37.163 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:37.163 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:37.163 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:37.163 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:37.163 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:16:37.163 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:16:37.163 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:37.163 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:37.163 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:37.163 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:37.163 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:16:37.163 08:54:59 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:37.163 08:54:59 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:37.163 08:54:59 nvmf_rdma.nvmf_fio_target -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:37.163 08:54:59 nvmf_rdma.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.163 08:54:59 nvmf_rdma.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.164 08:54:59 nvmf_rdma.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.164 08:54:59 nvmf_rdma.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:16:37.164 08:54:59 nvmf_rdma.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.164 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:16:37.164 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:37.164 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:37.164 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:37.164 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:37.164 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:37.164 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:37.164 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:37.164 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:37.164 08:54:59 nvmf_rdma.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:37.164 08:54:59 nvmf_rdma.nvmf_fio_target -- 
target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:37.164 08:54:59 nvmf_rdma.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py 00:16:37.164 08:54:59 nvmf_rdma.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:16:37.164 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:16:37.164 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:37.164 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:37.164 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:37.164 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:37.164 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.164 08:54:59 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:37.164 08:54:59 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.164 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:37.164 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:37.164 08:54:59 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:37.164 08:54:59 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:42.434 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:42.434 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:16:42.434 08:55:04 
nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@377 -- # modinfo irdma 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:42.434 Found net devices under 0000:af:00.0: cvl_0_0 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:42.434 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:42.435 Found net devices under 0000:af:00.1: cvl_0_1 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@420 -- # rdma_device_init 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@58 -- # uname 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 
00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo cvl_0_0 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo cvl_0_1 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:16:42.435 8: cvl_0_0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:16:42.435 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:16:42.435 altname enp175s0f0np0 00:16:42.435 altname ens801f0np0 00:16:42.435 inet 192.168.100.8/24 scope global cvl_0_0 00:16:42.435 valid_lft forever preferred_lft forever 00:16:42.435 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:16:42.435 valid_lft forever preferred_lft forever 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o 
-4 addr show cvl_0_1 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:16:42.435 9: cvl_0_1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:16:42.435 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:16:42.435 altname enp175s0f1np1 00:16:42.435 altname ens801f1np1 00:16:42.435 inet 192.168.100.9/24 scope global cvl_0_1 00:16:42.435 valid_lft forever preferred_lft forever 00:16:42.435 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:16:42.435 valid_lft forever preferred_lft forever 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo cvl_0_0 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo cvl_0_1 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # 
interface=cvl_0_0 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:16:42.435 192.168.100.9' 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:16:42.435 192.168.100.9' 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # head -n 1 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:16:42.435 192.168.100.9' 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # tail -n +2 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # head -n 1 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1329221 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1329221 00:16:42.435 08:55:04 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@830 -- # '[' -z 1329221 ']' 00:16:42.436 08:55:04 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.436 08:55:04 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:16:42.436 08:55:04 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:42.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:42.436 08:55:04 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:16:42.436 08:55:04 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.436 [2024-06-09 08:55:04.872773] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:16:42.436 [2024-06-09 08:55:04.872818] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:42.436 EAL: No free 2048 kB hugepages reported on node 1 00:16:42.436 [2024-06-09 08:55:04.929277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:42.694 [2024-06-09 08:55:05.003797] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:42.694 [2024-06-09 08:55:05.003835] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:42.695 [2024-06-09 08:55:05.003842] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:42.695 [2024-06-09 08:55:05.003848] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:42.695 [2024-06-09 08:55:05.003853] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:42.695 [2024-06-09 08:55:05.003893] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:16:42.695 [2024-06-09 08:55:05.003994] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:16:42.695 [2024-06-09 08:55:05.004015] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:16:42.695 [2024-06-09 08:55:05.004016] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.260 08:55:05 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:16:43.260 08:55:05 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@863 -- # return 0 00:16:43.260 08:55:05 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:43.260 08:55:05 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:16:43.260 08:55:05 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.260 08:55:05 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:43.260 08:55:05 nvmf_rdma.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:43.518 [2024-06-09 08:55:05.880549] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x10c48f0/0x10c3f30) succeed. 00:16:43.518 [2024-06-09 08:55:05.889452] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x10c5ca0/0x10c44b0) succeed. 00:16:43.518 [2024-06-09 08:55:05.889474] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:16:43.518 08:55:05 nvmf_rdma.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:43.776 08:55:06 nvmf_rdma.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:43.776 08:55:06 nvmf_rdma.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:43.776 08:55:06 nvmf_rdma.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:43.776 08:55:06 nvmf_rdma.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:44.035 08:55:06 nvmf_rdma.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:44.035 08:55:06 nvmf_rdma.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:44.294 08:55:06 nvmf_rdma.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:44.294 08:55:06 nvmf_rdma.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:44.553 08:55:06 nvmf_rdma.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:44.553 08:55:07 nvmf_rdma.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:44.553 08:55:07 nvmf_rdma.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:44.812 08:55:07 nvmf_rdma.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:44.812 08:55:07 nvmf_rdma.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:45.071 08:55:07 nvmf_rdma.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:45.071 08:55:07 nvmf_rdma.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:45.329 08:55:07 nvmf_rdma.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:45.329 08:55:07 nvmf_rdma.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:45.329 08:55:07 nvmf_rdma.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:45.588 08:55:08 nvmf_rdma.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:45.588 08:55:08 nvmf_rdma.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:45.846 08:55:08 nvmf_rdma.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:45.846 [2024-06-09 08:55:08.361698] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:45.846 08:55:08 nvmf_rdma.nvmf_fio_target -- target/fio.sh@41 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:46.104 08:55:08 nvmf_rdma.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:46.362 08:55:08 nvmf_rdma.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:46.621 08:55:08 nvmf_rdma.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:46.621 08:55:08 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1197 -- # local i=0 00:16:46.621 08:55:08 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:16:46.621 08:55:08 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1199 -- # [[ -n 4 ]] 00:16:46.621 08:55:08 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1200 -- # nvme_device_counter=4 00:16:46.621 08:55:08 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1204 -- # sleep 2 00:16:48.524 08:55:10 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:16:48.524 08:55:10 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:48.524 08:55:10 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:16:48.524 08:55:10 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1206 -- # nvme_devices=4 00:16:48.524 08:55:10 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:16:48.524 08:55:10 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1207 -- # return 0 00:16:48.524 08:55:10 nvmf_rdma.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:48.524 [global] 00:16:48.524 thread=1 00:16:48.524 invalidate=1 00:16:48.524 rw=write 00:16:48.524 time_based=1 00:16:48.524 runtime=1 00:16:48.524 ioengine=libaio 00:16:48.524 direct=1 00:16:48.524 bs=4096 00:16:48.524 iodepth=1 00:16:48.524 norandommap=0 00:16:48.524 numjobs=1 00:16:48.524 00:16:48.524 verify_dump=1 00:16:48.524 verify_backlog=512 00:16:48.524 verify_state_save=0 00:16:48.524 do_verify=1 00:16:48.524 verify=crc32c-intel 00:16:48.524 [job0] 00:16:48.524 filename=/dev/nvme0n1 00:16:48.524 [job1] 00:16:48.524 filename=/dev/nvme0n2 00:16:48.524 [job2] 00:16:48.524 filename=/dev/nvme0n3 00:16:48.524 [job3] 00:16:48.524 filename=/dev/nvme0n4 00:16:48.524 Could not set queue depth (nvme0n1) 00:16:48.524 Could not set queue depth (nvme0n2) 00:16:48.524 Could not set queue depth (nvme0n3) 00:16:48.524 Could not set queue depth (nvme0n4) 00:16:48.782 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:48.782 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:48.782 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:48.782 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:48.782 fio-3.35 00:16:48.782 Starting 4 threads 00:16:50.166 00:16:50.166 job0: (groupid=0, jobs=1): err= 0: pid=1330537: Sun Jun 9 08:55:12 
2024 00:16:50.166 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:16:50.166 slat (nsec): min=6658, max=30988, avg=8009.33, stdev=1362.44 00:16:50.166 clat (usec): min=75, max=197, avg=125.26, stdev=14.18 00:16:50.166 lat (usec): min=84, max=205, avg=133.27, stdev=14.01 00:16:50.166 clat percentiles (usec): 00:16:50.166 | 1.00th=[ 84], 5.00th=[ 94], 10.00th=[ 109], 20.00th=[ 118], 00:16:50.166 | 30.00th=[ 122], 40.00th=[ 125], 50.00th=[ 127], 60.00th=[ 130], 00:16:50.166 | 70.00th=[ 133], 80.00th=[ 135], 90.00th=[ 141], 95.00th=[ 145], 00:16:50.166 | 99.00th=[ 159], 99.50th=[ 172], 99.90th=[ 186], 99.95th=[ 194], 00:16:50.166 | 99.99th=[ 198] 00:16:50.166 write: IOPS=3956, BW=15.5MiB/s (16.2MB/s)(15.5MiB/1001msec); 0 zone resets 00:16:50.166 slat (nsec): min=8659, max=41113, avg=10486.18, stdev=1863.48 00:16:50.166 clat (usec): min=66, max=178, avg=116.51, stdev=14.24 00:16:50.166 lat (usec): min=77, max=188, avg=127.00, stdev=14.26 00:16:50.166 clat percentiles (usec): 00:16:50.166 | 1.00th=[ 80], 5.00th=[ 87], 10.00th=[ 94], 20.00th=[ 109], 00:16:50.166 | 30.00th=[ 114], 40.00th=[ 117], 50.00th=[ 119], 60.00th=[ 122], 00:16:50.166 | 70.00th=[ 124], 80.00th=[ 127], 90.00th=[ 133], 95.00th=[ 135], 00:16:50.166 | 99.00th=[ 149], 99.50th=[ 159], 99.90th=[ 178], 99.95th=[ 178], 00:16:50.166 | 99.99th=[ 180] 00:16:50.166 bw ( KiB/s): min=16384, max=16384, per=23.53%, avg=16384.00, stdev= 0.00, samples=1 00:16:50.166 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:16:50.166 lat (usec) : 100=10.01%, 250=89.99% 00:16:50.166 cpu : usr=5.70%, sys=8.10%, ctx=7544, majf=0, minf=1 00:16:50.166 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:50.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.166 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.166 issued rwts: total=3584,3960,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.166 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:50.166 job1: (groupid=0, jobs=1): err= 0: pid=1330538: Sun Jun 9 08:55:12 2024 00:16:50.166 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:16:50.166 slat (nsec): min=6766, max=38259, avg=8542.86, stdev=2108.20 00:16:50.166 clat (usec): min=66, max=206, avg=93.58, stdev=16.43 00:16:50.166 lat (usec): min=77, max=214, avg=102.13, stdev=16.73 00:16:50.166 clat percentiles (usec): 00:16:50.166 | 1.00th=[ 75], 5.00th=[ 78], 10.00th=[ 80], 20.00th=[ 82], 00:16:50.166 | 30.00th=[ 84], 40.00th=[ 86], 50.00th=[ 88], 60.00th=[ 91], 00:16:50.166 | 70.00th=[ 95], 80.00th=[ 102], 90.00th=[ 124], 95.00th=[ 130], 00:16:50.166 | 99.00th=[ 139], 99.50th=[ 143], 99.90th=[ 153], 99.95th=[ 186], 00:16:50.166 | 99.99th=[ 208] 00:16:50.166 write: IOPS=4896, BW=19.1MiB/s (20.1MB/s)(19.1MiB/1001msec); 0 zone resets 00:16:50.166 slat (nsec): min=8520, max=38580, avg=10887.17, stdev=2519.38 00:16:50.166 clat (usec): min=65, max=356, avg=92.24, stdev=17.87 00:16:50.166 lat (usec): min=78, max=366, avg=103.12, stdev=18.09 00:16:50.166 clat percentiles (usec): 00:16:50.166 | 1.00th=[ 73], 5.00th=[ 76], 10.00th=[ 78], 20.00th=[ 80], 00:16:50.166 | 30.00th=[ 83], 40.00th=[ 85], 50.00th=[ 87], 60.00th=[ 90], 00:16:50.166 | 70.00th=[ 93], 80.00th=[ 102], 90.00th=[ 119], 95.00th=[ 131], 00:16:50.166 | 99.00th=[ 151], 99.50th=[ 155], 99.90th=[ 190], 99.95th=[ 196], 00:16:50.166 | 99.99th=[ 355] 00:16:50.166 bw ( KiB/s): min=18624, max=18624, per=26.75%, avg=18624.00, stdev= 0.00, samples=1 00:16:50.166 iops 
: min= 4656, max= 4656, avg=4656.00, stdev= 0.00, samples=1 00:16:50.166 lat (usec) : 100=78.48%, 250=21.51%, 500=0.01% 00:16:50.166 cpu : usr=5.80%, sys=11.40%, ctx=9509, majf=0, minf=2 00:16:50.166 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:50.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.166 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.166 issued rwts: total=4608,4901,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.166 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:50.166 job2: (groupid=0, jobs=1): err= 0: pid=1330539: Sun Jun 9 08:55:12 2024 00:16:50.166 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:16:50.166 slat (nsec): min=6626, max=23523, avg=7614.86, stdev=831.27 00:16:50.166 clat (usec): min=78, max=196, avg=109.23, stdev=19.07 00:16:50.166 lat (usec): min=86, max=204, avg=116.85, stdev=19.12 00:16:50.166 clat percentiles (usec): 00:16:50.166 | 1.00th=[ 82], 5.00th=[ 85], 10.00th=[ 87], 20.00th=[ 90], 00:16:50.166 | 30.00th=[ 93], 40.00th=[ 97], 50.00th=[ 106], 60.00th=[ 120], 00:16:50.166 | 70.00th=[ 125], 80.00th=[ 129], 90.00th=[ 133], 95.00th=[ 137], 00:16:50.166 | 99.00th=[ 149], 99.50th=[ 157], 99.90th=[ 176], 99.95th=[ 188], 00:16:50.166 | 99.99th=[ 198] 00:16:50.166 write: IOPS=4327, BW=16.9MiB/s (17.7MB/s)(16.9MiB/1001msec); 0 zone resets 00:16:50.166 slat (nsec): min=8419, max=34518, avg=9654.59, stdev=1031.25 00:16:50.166 clat (usec): min=74, max=193, avg=106.49, stdev=18.72 00:16:50.166 lat (usec): min=83, max=203, avg=116.15, stdev=18.88 00:16:50.166 clat percentiles (usec): 00:16:50.166 | 1.00th=[ 80], 5.00th=[ 84], 10.00th=[ 86], 20.00th=[ 89], 00:16:50.166 | 30.00th=[ 92], 40.00th=[ 95], 50.00th=[ 102], 60.00th=[ 115], 00:16:50.166 | 70.00th=[ 120], 80.00th=[ 125], 90.00th=[ 131], 95.00th=[ 137], 00:16:50.166 | 99.00th=[ 151], 99.50th=[ 159], 99.90th=[ 178], 99.95th=[ 184], 00:16:50.166 | 99.99th=[ 194] 00:16:50.166 bw ( KiB/s): min=18224, max=18224, per=26.17%, avg=18224.00, stdev= 0.00, samples=1 00:16:50.166 iops : min= 4556, max= 4556, avg=4556.00, stdev= 0.00, samples=1 00:16:50.166 lat (usec) : 100=46.70%, 250=53.30% 00:16:50.166 cpu : usr=5.50%, sys=9.10%, ctx=8428, majf=0, minf=1 00:16:50.166 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:50.166 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.166 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.166 issued rwts: total=4096,4332,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.166 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:50.166 job3: (groupid=0, jobs=1): err= 0: pid=1330540: Sun Jun 9 08:55:12 2024 00:16:50.166 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:16:50.166 slat (nsec): min=6768, max=28720, avg=7657.39, stdev=965.80 00:16:50.166 clat (usec): min=77, max=192, avg=112.96, stdev=19.93 00:16:50.166 lat (usec): min=85, max=199, avg=120.62, stdev=19.99 00:16:50.166 clat percentiles (usec): 00:16:50.166 | 1.00th=[ 83], 5.00th=[ 86], 10.00th=[ 88], 20.00th=[ 92], 00:16:50.166 | 30.00th=[ 95], 40.00th=[ 101], 50.00th=[ 118], 60.00th=[ 124], 00:16:50.166 | 70.00th=[ 128], 80.00th=[ 133], 90.00th=[ 137], 95.00th=[ 143], 00:16:50.166 | 99.00th=[ 155], 99.50th=[ 159], 99.90th=[ 174], 99.95th=[ 176], 00:16:50.166 | 99.99th=[ 192] 00:16:50.167 write: IOPS=4228, BW=16.5MiB/s (17.3MB/s)(16.5MiB/1001msec); 0 zone resets 00:16:50.167 slat (nsec): 
min=8621, max=34119, avg=9703.43, stdev=1034.17 00:16:50.167 clat (usec): min=74, max=166, avg=105.61, stdev=16.90 00:16:50.167 lat (usec): min=85, max=198, avg=115.31, stdev=17.03 00:16:50.167 clat percentiles (usec): 00:16:50.167 | 1.00th=[ 81], 5.00th=[ 84], 10.00th=[ 86], 20.00th=[ 89], 00:16:50.167 | 30.00th=[ 92], 40.00th=[ 96], 50.00th=[ 105], 60.00th=[ 114], 00:16:50.167 | 70.00th=[ 118], 80.00th=[ 123], 90.00th=[ 127], 95.00th=[ 133], 00:16:50.167 | 99.00th=[ 145], 99.50th=[ 151], 99.90th=[ 157], 99.95th=[ 163], 00:16:50.167 | 99.99th=[ 167] 00:16:50.167 bw ( KiB/s): min=16384, max=16384, per=23.53%, avg=16384.00, stdev= 0.00, samples=1 00:16:50.167 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:16:50.167 lat (usec) : 100=42.51%, 250=57.49% 00:16:50.167 cpu : usr=4.50%, sys=10.00%, ctx=8329, majf=0, minf=1 00:16:50.167 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:50.167 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.167 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.167 issued rwts: total=4096,4233,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.167 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:50.167 00:16:50.167 Run status group 0 (all jobs): 00:16:50.167 READ: bw=63.9MiB/s (67.0MB/s), 14.0MiB/s-18.0MiB/s (14.7MB/s-18.9MB/s), io=64.0MiB (67.1MB), run=1001-1001msec 00:16:50.167 WRITE: bw=68.0MiB/s (71.3MB/s), 15.5MiB/s-19.1MiB/s (16.2MB/s-20.1MB/s), io=68.1MiB (71.4MB), run=1001-1001msec 00:16:50.167 00:16:50.167 Disk stats (read/write): 00:16:50.167 nvme0n1: ios=3122/3387, merge=0/0, ticks=380/366, in_queue=746, util=86.57% 00:16:50.167 nvme0n2: ios=3871/4096, merge=0/0, ticks=334/352, in_queue=686, util=86.69% 00:16:50.167 nvme0n3: ios=3584/3760, merge=0/0, ticks=360/380, in_queue=740, util=88.85% 00:16:50.167 nvme0n4: ios=3296/3584, merge=0/0, ticks=384/364, in_queue=748, util=89.60% 00:16:50.167 08:55:12 nvmf_rdma.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:50.167 [global] 00:16:50.167 thread=1 00:16:50.167 invalidate=1 00:16:50.167 rw=randwrite 00:16:50.167 time_based=1 00:16:50.167 runtime=1 00:16:50.167 ioengine=libaio 00:16:50.167 direct=1 00:16:50.167 bs=4096 00:16:50.167 iodepth=1 00:16:50.167 norandommap=0 00:16:50.167 numjobs=1 00:16:50.167 00:16:50.167 verify_dump=1 00:16:50.167 verify_backlog=512 00:16:50.167 verify_state_save=0 00:16:50.167 do_verify=1 00:16:50.167 verify=crc32c-intel 00:16:50.167 [job0] 00:16:50.167 filename=/dev/nvme0n1 00:16:50.167 [job1] 00:16:50.167 filename=/dev/nvme0n2 00:16:50.167 [job2] 00:16:50.167 filename=/dev/nvme0n3 00:16:50.167 [job3] 00:16:50.167 filename=/dev/nvme0n4 00:16:50.167 Could not set queue depth (nvme0n1) 00:16:50.167 Could not set queue depth (nvme0n2) 00:16:50.167 Could not set queue depth (nvme0n3) 00:16:50.167 Could not set queue depth (nvme0n4) 00:16:50.494 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:50.494 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:50.494 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:50.494 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:50.494 fio-3.35 00:16:50.494 Starting 4 
threads 00:16:51.897 00:16:51.897 job0: (groupid=0, jobs=1): err= 0: pid=1330912: Sun Jun 9 08:55:14 2024 00:16:51.897 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:16:51.897 slat (nsec): min=6644, max=21044, avg=7594.34, stdev=762.77 00:16:51.897 clat (usec): min=77, max=184, avg=144.70, stdev= 5.89 00:16:51.897 lat (usec): min=84, max=192, avg=152.29, stdev= 5.85 00:16:51.897 clat percentiles (usec): 00:16:51.897 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 143], 00:16:51.897 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 145], 60.00th=[ 147], 00:16:51.897 | 70.00th=[ 147], 80.00th=[ 149], 90.00th=[ 151], 95.00th=[ 153], 00:16:51.897 | 99.00th=[ 157], 99.50th=[ 161], 99.90th=[ 180], 99.95th=[ 186], 00:16:51.897 | 99.99th=[ 186] 00:16:51.897 write: IOPS=3530, BW=13.8MiB/s (14.5MB/s)(13.8MiB/1001msec); 0 zone resets 00:16:51.897 slat (nsec): min=7835, max=38747, avg=9459.34, stdev=1126.36 00:16:51.897 clat (usec): min=69, max=229, avg=136.90, stdev= 7.50 00:16:51.897 lat (usec): min=78, max=237, avg=146.36, stdev= 7.48 00:16:51.897 clat percentiles (usec): 00:16:51.897 | 1.00th=[ 113], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 135], 00:16:51.897 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 137], 60.00th=[ 139], 00:16:51.897 | 70.00th=[ 139], 80.00th=[ 141], 90.00th=[ 143], 95.00th=[ 147], 00:16:51.897 | 99.00th=[ 155], 99.50th=[ 165], 99.90th=[ 182], 99.95th=[ 186], 00:16:51.897 | 99.99th=[ 229] 00:16:51.897 bw ( KiB/s): min=14128, max=14128, per=25.12%, avg=14128.00, stdev= 0.00, samples=1 00:16:51.897 iops : min= 3532, max= 3532, avg=3532.00, stdev= 0.00, samples=1 00:16:51.897 lat (usec) : 100=0.61%, 250=99.39% 00:16:51.897 cpu : usr=3.70%, sys=8.00%, ctx=6606, majf=0, minf=1 00:16:51.897 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:51.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.897 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.897 issued rwts: total=3072,3534,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:51.897 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:51.897 job1: (groupid=0, jobs=1): err= 0: pid=1330914: Sun Jun 9 08:55:14 2024 00:16:51.897 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:16:51.897 slat (nsec): min=6484, max=23435, avg=7551.68, stdev=732.77 00:16:51.897 clat (usec): min=74, max=183, avg=144.84, stdev= 6.44 00:16:51.897 lat (usec): min=81, max=191, avg=152.39, stdev= 6.42 00:16:51.897 clat percentiles (usec): 00:16:51.897 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 143], 00:16:51.897 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 145], 60.00th=[ 147], 00:16:51.897 | 70.00th=[ 147], 80.00th=[ 149], 90.00th=[ 151], 95.00th=[ 153], 00:16:51.897 | 99.00th=[ 159], 99.50th=[ 165], 99.90th=[ 172], 99.95th=[ 180], 00:16:51.897 | 99.99th=[ 184] 00:16:51.897 write: IOPS=3518, BW=13.7MiB/s (14.4MB/s)(13.8MiB/1001msec); 0 zone resets 00:16:51.897 slat (nsec): min=8174, max=37040, avg=9616.64, stdev=1160.10 00:16:51.897 clat (usec): min=74, max=223, avg=137.14, stdev= 7.67 00:16:51.897 lat (usec): min=83, max=233, avg=146.75, stdev= 7.71 00:16:51.897 clat percentiles (usec): 00:16:51.897 | 1.00th=[ 124], 5.00th=[ 130], 10.00th=[ 131], 20.00th=[ 133], 00:16:51.897 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 137], 60.00th=[ 139], 00:16:51.897 | 70.00th=[ 139], 80.00th=[ 141], 90.00th=[ 145], 95.00th=[ 147], 00:16:51.897 | 99.00th=[ 161], 99.50th=[ 174], 99.90th=[ 196], 99.95th=[ 200], 00:16:51.897 | 99.99th=[ 
225] 00:16:51.897 bw ( KiB/s): min=14048, max=14048, per=24.97%, avg=14048.00, stdev= 0.00, samples=1 00:16:51.897 iops : min= 3512, max= 3512, avg=3512.00, stdev= 0.00, samples=1 00:16:51.897 lat (usec) : 100=0.47%, 250=99.53% 00:16:51.897 cpu : usr=5.60%, sys=6.00%, ctx=6594, majf=0, minf=2 00:16:51.897 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:51.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.898 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.898 issued rwts: total=3072,3522,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:51.898 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:51.898 job2: (groupid=0, jobs=1): err= 0: pid=1330915: Sun Jun 9 08:55:14 2024 00:16:51.898 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec) 00:16:51.898 slat (nsec): min=6797, max=17322, avg=7880.32, stdev=646.58 00:16:51.898 clat (usec): min=78, max=194, avg=144.70, stdev= 6.20 00:16:51.898 lat (usec): min=86, max=202, avg=152.58, stdev= 6.19 00:16:51.898 clat percentiles (usec): 00:16:51.898 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 139], 20.00th=[ 141], 00:16:51.898 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 145], 60.00th=[ 147], 00:16:51.898 | 70.00th=[ 147], 80.00th=[ 149], 90.00th=[ 151], 95.00th=[ 153], 00:16:51.898 | 99.00th=[ 163], 99.50th=[ 169], 99.90th=[ 184], 99.95th=[ 190], 00:16:51.898 | 99.99th=[ 194] 00:16:51.898 write: IOPS=3509, BW=13.7MiB/s (14.4MB/s)(13.7MiB/1002msec); 0 zone resets 00:16:51.898 slat (nsec): min=8541, max=34614, avg=9930.66, stdev=1027.85 00:16:51.898 clat (usec): min=75, max=199, avg=136.88, stdev= 7.43 00:16:51.898 lat (usec): min=86, max=212, avg=146.81, stdev= 7.43 00:16:51.898 clat percentiles (usec): 00:16:51.898 | 1.00th=[ 123], 5.00th=[ 130], 10.00th=[ 131], 20.00th=[ 133], 00:16:51.898 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 137], 60.00th=[ 139], 00:16:51.898 | 70.00th=[ 139], 80.00th=[ 141], 90.00th=[ 143], 95.00th=[ 147], 00:16:51.898 | 99.00th=[ 163], 99.50th=[ 174], 99.90th=[ 188], 99.95th=[ 198], 00:16:51.898 | 99.99th=[ 200] 00:16:51.898 bw ( KiB/s): min=14032, max=14104, per=25.01%, avg=14068.00, stdev=50.91, samples=2 00:16:51.898 iops : min= 3508, max= 3526, avg=3517.00, stdev=12.73, samples=2 00:16:51.898 lat (usec) : 100=0.32%, 250=99.68% 00:16:51.898 cpu : usr=4.50%, sys=7.19%, ctx=6590, majf=0, minf=1 00:16:51.898 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:51.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.898 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.898 issued rwts: total=3072,3517,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:51.898 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:51.898 job3: (groupid=0, jobs=1): err= 0: pid=1330916: Sun Jun 9 08:55:14 2024 00:16:51.898 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec) 00:16:51.898 slat (nsec): min=6399, max=20392, avg=7914.98, stdev=989.51 00:16:51.898 clat (usec): min=79, max=194, avg=144.60, stdev= 5.96 00:16:51.898 lat (usec): min=87, max=202, avg=152.52, stdev= 5.92 00:16:51.898 clat percentiles (usec): 00:16:51.898 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 139], 20.00th=[ 141], 00:16:51.898 | 30.00th=[ 143], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 147], 00:16:51.898 | 70.00th=[ 147], 80.00th=[ 149], 90.00th=[ 151], 95.00th=[ 153], 00:16:51.898 | 99.00th=[ 161], 99.50th=[ 169], 99.90th=[ 184], 99.95th=[ 190], 00:16:51.898 | 
99.99th=[ 196] 00:16:51.898 write: IOPS=3510, BW=13.7MiB/s (14.4MB/s)(13.7MiB/1002msec); 0 zone resets 00:16:51.898 slat (nsec): min=8003, max=34881, avg=9917.72, stdev=1298.83 00:16:51.898 clat (usec): min=76, max=208, avg=136.85, stdev= 7.68 00:16:51.898 lat (usec): min=86, max=218, avg=146.77, stdev= 7.67 00:16:51.898 clat percentiles (usec): 00:16:51.898 | 1.00th=[ 123], 5.00th=[ 129], 10.00th=[ 131], 20.00th=[ 133], 00:16:51.898 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 137], 60.00th=[ 139], 00:16:51.898 | 70.00th=[ 139], 80.00th=[ 141], 90.00th=[ 143], 95.00th=[ 147], 00:16:51.898 | 99.00th=[ 163], 99.50th=[ 176], 99.90th=[ 186], 99.95th=[ 200], 00:16:51.898 | 99.99th=[ 210] 00:16:51.898 bw ( KiB/s): min=14040, max=14104, per=25.02%, avg=14072.00, stdev=45.25, samples=2 00:16:51.898 iops : min= 3510, max= 3526, avg=3518.00, stdev=11.31, samples=2 00:16:51.898 lat (usec) : 100=0.38%, 250=99.62% 00:16:51.898 cpu : usr=3.40%, sys=8.49%, ctx=6591, majf=0, minf=1 00:16:51.898 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:51.898 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.898 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:51.898 issued rwts: total=3072,3518,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:51.898 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:51.898 00:16:51.898 Run status group 0 (all jobs): 00:16:51.898 READ: bw=47.9MiB/s (50.2MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=48.0MiB (50.3MB), run=1001-1002msec 00:16:51.898 WRITE: bw=54.9MiB/s (57.6MB/s), 13.7MiB/s-13.8MiB/s (14.4MB/s-14.5MB/s), io=55.0MiB (57.7MB), run=1001-1002msec 00:16:51.898 00:16:51.898 Disk stats (read/write): 00:16:51.898 nvme0n1: ios=2627/3072, merge=0/0, ticks=363/389, in_queue=752, util=86.67% 00:16:51.898 nvme0n2: ios=2566/3072, merge=0/0, ticks=356/398, in_queue=754, util=86.90% 00:16:51.898 nvme0n3: ios=2561/3072, merge=0/0, ticks=353/392, in_queue=745, util=89.07% 00:16:51.898 nvme0n4: ios=2562/3072, merge=0/0, ticks=347/400, in_queue=747, util=89.73% 00:16:51.898 08:55:14 nvmf_rdma.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:51.898 [global] 00:16:51.898 thread=1 00:16:51.898 invalidate=1 00:16:51.898 rw=write 00:16:51.898 time_based=1 00:16:51.898 runtime=1 00:16:51.898 ioengine=libaio 00:16:51.898 direct=1 00:16:51.898 bs=4096 00:16:51.898 iodepth=128 00:16:51.898 norandommap=0 00:16:51.898 numjobs=1 00:16:51.898 00:16:51.898 verify_dump=1 00:16:51.898 verify_backlog=512 00:16:51.898 verify_state_save=0 00:16:51.898 do_verify=1 00:16:51.898 verify=crc32c-intel 00:16:51.898 [job0] 00:16:51.898 filename=/dev/nvme0n1 00:16:51.898 [job1] 00:16:51.898 filename=/dev/nvme0n2 00:16:51.898 [job2] 00:16:51.898 filename=/dev/nvme0n3 00:16:51.898 [job3] 00:16:51.898 filename=/dev/nvme0n4 00:16:51.898 Could not set queue depth (nvme0n1) 00:16:51.898 Could not set queue depth (nvme0n2) 00:16:51.898 Could not set queue depth (nvme0n3) 00:16:51.898 Could not set queue depth (nvme0n4) 00:16:51.898 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:51.898 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:51.898 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:51.898 job3: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:51.898 fio-3.35 00:16:51.898 Starting 4 threads 00:16:53.273 00:16:53.273 job0: (groupid=0, jobs=1): err= 0: pid=1331281: Sun Jun 9 08:55:15 2024 00:16:53.273 read: IOPS=9746, BW=38.1MiB/s (39.9MB/s)(38.1MiB/1001msec) 00:16:53.273 slat (nsec): min=1414, max=4174.9k, avg=50325.72, stdev=191448.22 00:16:53.273 clat (usec): min=731, max=20385, avg=6523.89, stdev=1622.49 00:16:53.273 lat (usec): min=1629, max=20406, avg=6574.21, stdev=1625.67 00:16:53.273 clat percentiles (usec): 00:16:53.273 | 1.00th=[ 4817], 5.00th=[ 5342], 10.00th=[ 5407], 20.00th=[ 5538], 00:16:53.273 | 30.00th=[ 5604], 40.00th=[ 5735], 50.00th=[ 6325], 60.00th=[ 6849], 00:16:53.273 | 70.00th=[ 6980], 80.00th=[ 7111], 90.00th=[ 7242], 95.00th=[ 7373], 00:16:53.273 | 99.00th=[13435], 99.50th=[16188], 99.90th=[17171], 99.95th=[17957], 00:16:53.273 | 99.99th=[20317] 00:16:53.273 write: IOPS=10.2k, BW=40.0MiB/s (41.9MB/s)(40.0MiB/1001msec); 0 zone resets 00:16:53.273 slat (usec): min=2, max=1588, avg=47.09, stdev=170.34 00:16:53.273 clat (usec): min=2096, max=13057, avg=6167.35, stdev=1418.31 00:16:53.273 lat (usec): min=2103, max=13457, avg=6214.43, stdev=1420.00 00:16:53.273 clat percentiles (usec): 00:16:53.273 | 1.00th=[ 4555], 5.00th=[ 5014], 10.00th=[ 5145], 20.00th=[ 5211], 00:16:53.273 | 30.00th=[ 5276], 40.00th=[ 5407], 50.00th=[ 5735], 60.00th=[ 6456], 00:16:53.273 | 70.00th=[ 6652], 80.00th=[ 6783], 90.00th=[ 6980], 95.00th=[ 7308], 00:16:53.273 | 99.00th=[11863], 99.50th=[11994], 99.90th=[12649], 99.95th=[12780], 00:16:53.273 | 99.99th=[13042] 00:16:53.273 bw ( KiB/s): min=35472, max=35472, per=36.47%, avg=35472.00, stdev= 0.00, samples=1 00:16:53.273 iops : min= 8868, max= 8868, avg=8868.00, stdev= 0.00, samples=1 00:16:53.273 lat (usec) : 750=0.01% 00:16:53.273 lat (msec) : 2=0.06%, 4=0.29%, 10=95.27%, 20=4.37%, 50=0.01% 00:16:53.273 cpu : usr=3.40%, sys=6.80%, ctx=1359, majf=0, minf=1 00:16:53.273 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:16:53.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:53.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:53.273 issued rwts: total=9756,10240,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:53.273 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:53.273 job1: (groupid=0, jobs=1): err= 0: pid=1331282: Sun Jun 9 08:55:15 2024 00:16:53.273 read: IOPS=3882, BW=15.2MiB/s (15.9MB/s)(15.2MiB/1004msec) 00:16:53.273 slat (nsec): min=1494, max=2436.6k, avg=127027.56, stdev=325107.23 00:16:53.273 clat (usec): min=3569, max=22190, avg=16222.30, stdev=2226.27 00:16:53.273 lat (usec): min=5168, max=23877, avg=16349.32, stdev=2234.14 00:16:53.273 clat percentiles (usec): 00:16:53.273 | 1.00th=[ 9634], 5.00th=[12387], 10.00th=[14353], 20.00th=[14615], 00:16:53.273 | 30.00th=[15139], 40.00th=[15401], 50.00th=[15664], 60.00th=[16057], 00:16:53.273 | 70.00th=[18220], 80.00th=[18482], 90.00th=[19006], 95.00th=[19268], 00:16:53.273 | 99.00th=[20317], 99.50th=[20841], 99.90th=[22152], 99.95th=[22152], 00:16:53.273 | 99.99th=[22152] 00:16:53.274 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:16:53.274 slat (usec): min=2, max=2645, avg=120.61, stdev=316.85 00:16:53.274 clat (usec): min=8175, max=21047, avg=15577.47, stdev=2536.01 00:16:53.274 lat (usec): min=8182, max=21056, avg=15698.09, stdev=2558.13 00:16:53.274 clat percentiles (usec): 00:16:53.274 | 1.00th=[10683], 
5.00th=[11076], 10.00th=[11600], 20.00th=[13960], 00:16:53.274 | 30.00th=[14222], 40.00th=[14615], 50.00th=[15008], 60.00th=[15270], 00:16:53.274 | 70.00th=[17957], 80.00th=[18482], 90.00th=[18744], 95.00th=[19006], 00:16:53.274 | 99.00th=[19268], 99.50th=[20317], 99.90th=[20841], 99.95th=[20841], 00:16:53.274 | 99.99th=[21103] 00:16:53.274 bw ( KiB/s): min=14680, max=18088, per=16.84%, avg=16384.00, stdev=2409.82, samples=2 00:16:53.274 iops : min= 3670, max= 4522, avg=4096.00, stdev=602.45, samples=2 00:16:53.274 lat (msec) : 4=0.01%, 10=0.75%, 20=98.06%, 50=1.18% 00:16:53.274 cpu : usr=1.50%, sys=2.89%, ctx=1180, majf=0, minf=1 00:16:53.274 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:53.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:53.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:53.274 issued rwts: total=3898,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:53.274 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:53.274 job2: (groupid=0, jobs=1): err= 0: pid=1331283: Sun Jun 9 08:55:15 2024 00:16:53.274 read: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec) 00:16:53.274 slat (nsec): min=1581, max=5411.7k, avg=86718.64, stdev=308307.45 00:16:53.274 clat (usec): min=5868, max=22869, avg=11231.69, stdev=4481.00 00:16:53.274 lat (usec): min=6637, max=22874, avg=11318.41, stdev=4512.16 00:16:53.274 clat percentiles (usec): 00:16:53.274 | 1.00th=[ 6718], 5.00th=[ 7504], 10.00th=[ 8094], 20.00th=[ 8356], 00:16:53.274 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8586], 60.00th=[ 8717], 00:16:53.274 | 70.00th=[11469], 80.00th=[18220], 90.00th=[18744], 95.00th=[19006], 00:16:53.274 | 99.00th=[19268], 99.50th=[19268], 99.90th=[21103], 99.95th=[21365], 00:16:53.274 | 99.99th=[22938] 00:16:53.274 write: IOPS=5959, BW=23.3MiB/s (24.4MB/s)(23.4MiB/1004msec); 0 zone resets 00:16:53.274 slat (usec): min=2, max=3251, avg=83.15, stdev=277.59 00:16:53.274 clat (usec): min=3497, max=22180, avg=10649.24, stdev=4337.04 00:16:53.274 lat (usec): min=5000, max=22189, avg=10732.39, stdev=4366.04 00:16:53.274 clat percentiles (usec): 00:16:53.274 | 1.00th=[ 6783], 5.00th=[ 7373], 10.00th=[ 7635], 20.00th=[ 7963], 00:16:53.274 | 30.00th=[ 8029], 40.00th=[ 8160], 50.00th=[ 8291], 60.00th=[ 8455], 00:16:53.274 | 70.00th=[10814], 80.00th=[17695], 90.00th=[18744], 95.00th=[19006], 00:16:53.274 | 99.00th=[19530], 99.50th=[19792], 99.90th=[21365], 99.95th=[21627], 00:16:53.274 | 99.99th=[22152] 00:16:53.274 bw ( KiB/s): min=16928, max=29920, per=24.08%, avg=23424.00, stdev=9186.73, samples=2 00:16:53.274 iops : min= 4232, max= 7480, avg=5856.00, stdev=2296.68, samples=2 00:16:53.274 lat (msec) : 4=0.01%, 10=68.85%, 20=30.68%, 50=0.46% 00:16:53.274 cpu : usr=2.89%, sys=3.09%, ctx=1157, majf=0, minf=1 00:16:53.274 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:16:53.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:53.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:53.274 issued rwts: total=5632,5983,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:53.274 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:53.274 job3: (groupid=0, jobs=1): err= 0: pid=1331284: Sun Jun 9 08:55:15 2024 00:16:53.274 read: IOPS=3670, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1004msec) 00:16:53.274 slat (nsec): min=1421, max=4511.9k, avg=130004.76, stdev=430199.15 00:16:53.274 clat (usec): min=3532, max=23404, avg=16653.94, 
stdev=2349.88 00:16:53.274 lat (usec): min=4306, max=23413, avg=16783.95, stdev=2384.06 00:16:53.274 clat percentiles (usec): 00:16:53.274 | 1.00th=[ 9634], 5.00th=[12387], 10.00th=[15008], 20.00th=[15401], 00:16:53.274 | 30.00th=[15664], 40.00th=[15795], 50.00th=[16188], 60.00th=[16909], 00:16:53.274 | 70.00th=[18482], 80.00th=[19006], 90.00th=[19268], 95.00th=[19530], 00:16:53.274 | 99.00th=[21365], 99.50th=[22676], 99.90th=[23200], 99.95th=[23200], 00:16:53.274 | 99.99th=[23462] 00:16:53.274 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:16:53.274 slat (usec): min=2, max=4739, avg=124.63, stdev=421.90 00:16:53.274 clat (usec): min=8871, max=23318, avg=16021.81, stdev=2430.03 00:16:53.274 lat (usec): min=8879, max=23329, avg=16146.43, stdev=2470.91 00:16:53.274 clat percentiles (usec): 00:16:53.274 | 1.00th=[10814], 5.00th=[11469], 10.00th=[11994], 20.00th=[14746], 00:16:53.274 | 30.00th=[15008], 40.00th=[15139], 50.00th=[15664], 60.00th=[16188], 00:16:53.274 | 70.00th=[18220], 80.00th=[18482], 90.00th=[18744], 95.00th=[18744], 00:16:53.274 | 99.00th=[21365], 99.50th=[22152], 99.90th=[22938], 99.95th=[22938], 00:16:53.274 | 99.99th=[23200] 00:16:53.274 bw ( KiB/s): min=15752, max=16808, per=16.74%, avg=16280.00, stdev=746.70, samples=2 00:16:53.274 iops : min= 3938, max= 4202, avg=4070.00, stdev=186.68, samples=2 00:16:53.274 lat (msec) : 4=0.01%, 10=0.76%, 20=97.57%, 50=1.66% 00:16:53.274 cpu : usr=1.40%, sys=3.09%, ctx=917, majf=0, minf=1 00:16:53.274 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:53.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:53.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:53.274 issued rwts: total=3685,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:53.274 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:53.274 00:16:53.274 Run status group 0 (all jobs): 00:16:53.274 READ: bw=89.4MiB/s (93.7MB/s), 14.3MiB/s-38.1MiB/s (15.0MB/s-39.9MB/s), io=89.7MiB (94.1MB), run=1001-1004msec 00:16:53.274 WRITE: bw=95.0MiB/s (99.6MB/s), 15.9MiB/s-40.0MiB/s (16.7MB/s-41.9MB/s), io=95.4MiB (100MB), run=1001-1004msec 00:16:53.274 00:16:53.274 Disk stats (read/write): 00:16:53.274 nvme0n1: ios=7730/8026, merge=0/0, ticks=17252/16813, in_queue=34065, util=83.27% 00:16:53.274 nvme0n2: ios=3151/3584, merge=0/0, ticks=16388/17779, in_queue=34167, util=83.95% 00:16:53.274 nvme0n3: ios=5120/5231, merge=0/0, ticks=17584/16410, in_queue=33994, util=87.79% 00:16:53.274 nvme0n4: ios=3072/3425, merge=0/0, ticks=16567/17409, in_queue=33976, util=89.34% 00:16:53.274 08:55:15 nvmf_rdma.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:53.274 [global] 00:16:53.274 thread=1 00:16:53.274 invalidate=1 00:16:53.274 rw=randwrite 00:16:53.274 time_based=1 00:16:53.274 runtime=1 00:16:53.274 ioengine=libaio 00:16:53.274 direct=1 00:16:53.274 bs=4096 00:16:53.274 iodepth=128 00:16:53.274 norandommap=0 00:16:53.274 numjobs=1 00:16:53.274 00:16:53.274 verify_dump=1 00:16:53.274 verify_backlog=512 00:16:53.274 verify_state_save=0 00:16:53.274 do_verify=1 00:16:53.274 verify=crc32c-intel 00:16:53.274 [job0] 00:16:53.274 filename=/dev/nvme0n1 00:16:53.274 [job1] 00:16:53.274 filename=/dev/nvme0n2 00:16:53.274 [job2] 00:16:53.274 filename=/dev/nvme0n3 00:16:53.274 [job3] 00:16:53.274 filename=/dev/nvme0n4 00:16:53.274 Could not set queue depth (nvme0n1) 
00:16:53.274 Could not set queue depth (nvme0n2) 00:16:53.274 Could not set queue depth (nvme0n3) 00:16:53.274 Could not set queue depth (nvme0n4) 00:16:53.533 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:53.533 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:53.533 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:53.533 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:53.533 fio-3.35 00:16:53.533 Starting 4 threads 00:16:54.910 00:16:54.910 job0: (groupid=0, jobs=1): err= 0: pid=1331670: Sun Jun 9 08:55:17 2024 00:16:54.910 read: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec) 00:16:54.910 slat (nsec): min=1406, max=2413.9k, avg=73904.13, stdev=275615.30 00:16:54.910 clat (usec): min=8281, max=12082, avg=9607.94, stdev=428.59 00:16:54.910 lat (usec): min=8593, max=12092, avg=9681.84, stdev=466.60 00:16:54.910 clat percentiles (usec): 00:16:54.910 | 1.00th=[ 8717], 5.00th=[ 8979], 10.00th=[ 9110], 20.00th=[ 9372], 00:16:54.910 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9634], 00:16:54.910 | 70.00th=[ 9765], 80.00th=[ 9765], 90.00th=[ 9896], 95.00th=[10421], 00:16:54.910 | 99.00th=[11207], 99.50th=[11338], 99.90th=[11731], 99.95th=[11863], 00:16:54.910 | 99.99th=[12125] 00:16:54.910 write: IOPS=6798, BW=26.6MiB/s (27.8MB/s)(26.6MiB/1003msec); 0 zone resets 00:16:54.910 slat (nsec): min=1911, max=2256.4k, avg=72186.04, stdev=260332.66 00:16:54.910 clat (usec): min=1943, max=11603, avg=9242.72, stdev=643.68 00:16:54.910 lat (usec): min=2654, max=12017, avg=9314.91, stdev=667.28 00:16:54.910 clat percentiles (usec): 00:16:54.910 | 1.00th=[ 7308], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 8979], 00:16:54.910 | 30.00th=[ 9110], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9372], 00:16:54.910 | 70.00th=[ 9372], 80.00th=[ 9503], 90.00th=[ 9634], 95.00th=[10028], 00:16:54.910 | 99.00th=[10945], 99.50th=[11076], 99.90th=[11469], 99.95th=[11469], 00:16:54.910 | 99.99th=[11600] 00:16:54.910 bw ( KiB/s): min=25224, max=28304, per=30.67%, avg=26764.00, stdev=2177.89, samples=2 00:16:54.910 iops : min= 6306, max= 7076, avg=6691.00, stdev=544.47, samples=2 00:16:54.910 lat (msec) : 2=0.01%, 4=0.12%, 10=93.30%, 20=6.58% 00:16:54.910 cpu : usr=1.90%, sys=4.59%, ctx=1137, majf=0, minf=1 00:16:54.910 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:16:54.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.910 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:54.911 issued rwts: total=6656,6819,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.911 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.911 job1: (groupid=0, jobs=1): err= 0: pid=1331682: Sun Jun 9 08:55:17 2024 00:16:54.911 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:16:54.911 slat (nsec): min=1487, max=2655.9k, avg=96982.61, stdev=365008.00 00:16:54.911 clat (usec): min=6720, max=15809, avg=12499.87, stdev=606.31 00:16:54.911 lat (usec): min=6722, max=15811, avg=12596.85, stdev=608.42 00:16:54.911 clat percentiles (usec): 00:16:54.911 | 1.00th=[10159], 5.00th=[11863], 10.00th=[11994], 20.00th=[12256], 00:16:54.911 | 30.00th=[12387], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:16:54.911 | 70.00th=[12780], 80.00th=[12911], 
90.00th=[13042], 95.00th=[13173], 00:16:54.911 | 99.00th=[13435], 99.50th=[13566], 99.90th=[14484], 99.95th=[14615], 00:16:54.911 | 99.99th=[15795] 00:16:54.911 write: IOPS=5146, BW=20.1MiB/s (21.1MB/s)(20.2MiB/1003msec); 0 zone resets 00:16:54.911 slat (nsec): min=1968, max=2599.5k, avg=95071.54, stdev=343510.72 00:16:54.911 clat (usec): min=1283, max=14814, avg=12174.88, stdev=884.71 00:16:54.911 lat (usec): min=3457, max=14924, avg=12269.96, stdev=871.23 00:16:54.911 clat percentiles (usec): 00:16:54.911 | 1.00th=[ 9765], 5.00th=[11600], 10.00th=[11731], 20.00th=[11863], 00:16:54.911 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12256], 60.00th=[12387], 00:16:54.911 | 70.00th=[12387], 80.00th=[12649], 90.00th=[12911], 95.00th=[13042], 00:16:54.911 | 99.00th=[13304], 99.50th=[14091], 99.90th=[14484], 99.95th=[14615], 00:16:54.911 | 99.99th=[14877] 00:16:54.911 bw ( KiB/s): min=20480, max=20480, per=23.47%, avg=20480.00, stdev= 0.00, samples=2 00:16:54.911 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:16:54.911 lat (msec) : 2=0.01%, 4=0.25%, 10=0.68%, 20=99.06% 00:16:54.911 cpu : usr=1.10%, sys=4.49%, ctx=959, majf=0, minf=1 00:16:54.911 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:54.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:54.911 issued rwts: total=5120,5162,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.911 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.911 job2: (groupid=0, jobs=1): err= 0: pid=1331701: Sun Jun 9 08:55:17 2024 00:16:54.911 read: IOPS=5370, BW=21.0MiB/s (22.0MB/s)(21.1MiB/1004msec) 00:16:54.911 slat (nsec): min=1475, max=2756.0k, avg=91207.98, stdev=340140.44 00:16:54.911 clat (usec): min=3036, max=14605, avg=11695.32, stdev=847.36 00:16:54.911 lat (usec): min=3038, max=14614, avg=11786.52, stdev=877.23 00:16:54.911 clat percentiles (usec): 00:16:54.911 | 1.00th=[ 7046], 5.00th=[10945], 10.00th=[11076], 20.00th=[11469], 00:16:54.911 | 30.00th=[11600], 40.00th=[11600], 50.00th=[11731], 60.00th=[11731], 00:16:54.911 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12125], 95.00th=[12780], 00:16:54.911 | 99.00th=[13698], 99.50th=[13829], 99.90th=[14484], 99.95th=[14615], 00:16:54.911 | 99.99th=[14615] 00:16:54.911 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:16:54.911 slat (nsec): min=1988, max=2566.3k, avg=87780.74, stdev=313411.99 00:16:54.911 clat (usec): min=8822, max=14117, avg=11345.45, stdev=508.61 00:16:54.911 lat (usec): min=8830, max=14130, avg=11433.23, stdev=549.51 00:16:54.911 clat percentiles (usec): 00:16:54.911 | 1.00th=[10290], 5.00th=[10552], 10.00th=[10683], 20.00th=[11076], 00:16:54.911 | 30.00th=[11207], 40.00th=[11207], 50.00th=[11338], 60.00th=[11469], 00:16:54.911 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11731], 95.00th=[12256], 00:16:54.911 | 99.00th=[13173], 99.50th=[13304], 99.90th=[13698], 99.95th=[14091], 00:16:54.911 | 99.99th=[14091] 00:16:54.911 bw ( KiB/s): min=22384, max=22672, per=25.82%, avg=22528.00, stdev=203.65, samples=2 00:16:54.911 iops : min= 5596, max= 5668, avg=5632.00, stdev=50.91, samples=2 00:16:54.911 lat (msec) : 4=0.11%, 10=0.99%, 20=98.90% 00:16:54.911 cpu : usr=2.09%, sys=3.59%, ctx=934, majf=0, minf=1 00:16:54.911 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:16:54.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.911 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:54.911 issued rwts: total=5392,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.911 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.911 job3: (groupid=0, jobs=1): err= 0: pid=1331706: Sun Jun 9 08:55:17 2024 00:16:54.911 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:16:54.911 slat (nsec): min=1479, max=2794.7k, avg=119644.89, stdev=438818.30 00:16:54.911 clat (usec): min=12020, max=18256, avg=15344.39, stdev=589.35 00:16:54.911 lat (usec): min=13505, max=18855, avg=15464.04, stdev=542.65 00:16:54.911 clat percentiles (usec): 00:16:54.911 | 1.00th=[13173], 5.00th=[14746], 10.00th=[14746], 20.00th=[14877], 00:16:54.911 | 30.00th=[15008], 40.00th=[15270], 50.00th=[15401], 60.00th=[15533], 00:16:54.911 | 70.00th=[15664], 80.00th=[15795], 90.00th=[16057], 95.00th=[16188], 00:16:54.911 | 99.00th=[16909], 99.50th=[17433], 99.90th=[17957], 99.95th=[17957], 00:16:54.911 | 99.99th=[18220] 00:16:54.911 write: IOPS=4270, BW=16.7MiB/s (17.5MB/s)(16.8MiB/1004msec); 0 zone resets 00:16:54.911 slat (usec): min=2, max=3219, avg=116.58, stdev=432.57 00:16:54.911 clat (usec): min=1463, max=17910, avg=14905.89, stdev=1294.82 00:16:54.911 lat (usec): min=3994, max=18269, avg=15022.47, stdev=1268.66 00:16:54.911 clat percentiles (usec): 00:16:54.911 | 1.00th=[ 8094], 5.00th=[14091], 10.00th=[14353], 20.00th=[14615], 00:16:54.911 | 30.00th=[14877], 40.00th=[15008], 50.00th=[15008], 60.00th=[15139], 00:16:54.911 | 70.00th=[15270], 80.00th=[15533], 90.00th=[15795], 95.00th=[15926], 00:16:54.911 | 99.00th=[16450], 99.50th=[16909], 99.90th=[17171], 99.95th=[17695], 00:16:54.911 | 99.99th=[17957] 00:16:54.911 bw ( KiB/s): min=16384, max=16896, per=19.07%, avg=16640.00, stdev=362.04, samples=2 00:16:54.911 iops : min= 4096, max= 4224, avg=4160.00, stdev=90.51, samples=2 00:16:54.911 lat (msec) : 2=0.01%, 4=0.02%, 10=0.74%, 20=99.22% 00:16:54.911 cpu : usr=1.50%, sys=3.09%, ctx=768, majf=0, minf=1 00:16:54.911 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:54.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:54.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:54.911 issued rwts: total=4096,4288,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:54.911 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:54.911 00:16:54.911 Run status group 0 (all jobs): 00:16:54.911 READ: bw=82.7MiB/s (86.8MB/s), 15.9MiB/s-25.9MiB/s (16.7MB/s-27.2MB/s), io=83.1MiB (87.1MB), run=1003-1004msec 00:16:54.911 WRITE: bw=85.2MiB/s (89.3MB/s), 16.7MiB/s-26.6MiB/s (17.5MB/s-27.8MB/s), io=85.6MiB (89.7MB), run=1003-1004msec 00:16:54.911 00:16:54.911 Disk stats (read/write): 00:16:54.911 nvme0n1: ios=5682/5827, merge=0/0, ticks=17712/17550, in_queue=35262, util=86.57% 00:16:54.911 nvme0n2: ios=4144/4608, merge=0/0, ticks=12856/13917, in_queue=26773, util=86.70% 00:16:54.911 nvme0n3: ios=4608/4754, merge=0/0, ticks=17766/17448, in_queue=35214, util=88.87% 00:16:54.911 nvme0n4: ios=3527/3584, merge=0/0, ticks=13406/13325, in_queue=26731, util=89.61% 00:16:54.911 08:55:17 nvmf_rdma.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:16:54.911 08:55:17 nvmf_rdma.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1331885 00:16:54.911 08:55:17 nvmf_rdma.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:54.911 08:55:17 nvmf_rdma.nvmf_fio_target 
-- target/fio.sh@61 -- # sleep 3 00:16:54.911 [global] 00:16:54.911 thread=1 00:16:54.911 invalidate=1 00:16:54.911 rw=read 00:16:54.911 time_based=1 00:16:54.911 runtime=10 00:16:54.911 ioengine=libaio 00:16:54.911 direct=1 00:16:54.911 bs=4096 00:16:54.911 iodepth=1 00:16:54.911 norandommap=1 00:16:54.911 numjobs=1 00:16:54.911 00:16:54.911 [job0] 00:16:54.911 filename=/dev/nvme0n1 00:16:54.911 [job1] 00:16:54.911 filename=/dev/nvme0n2 00:16:54.911 [job2] 00:16:54.911 filename=/dev/nvme0n3 00:16:54.911 [job3] 00:16:54.911 filename=/dev/nvme0n4 00:16:54.911 Could not set queue depth (nvme0n1) 00:16:54.911 Could not set queue depth (nvme0n2) 00:16:54.911 Could not set queue depth (nvme0n3) 00:16:54.911 Could not set queue depth (nvme0n4) 00:16:55.169 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:55.170 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:55.170 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:55.170 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:55.170 fio-3.35 00:16:55.170 Starting 4 threads 00:16:57.700 08:55:20 nvmf_rdma.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:57.956 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=84758528, buflen=4096 00:16:57.957 fio: pid=1332157, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:57.957 08:55:20 nvmf_rdma.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:58.214 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=86851584, buflen=4096 00:16:58.214 fio: pid=1332146, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:58.214 08:55:20 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:58.214 08:55:20 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:58.472 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=24252416, buflen=4096 00:16:58.472 fio: pid=1332122, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:58.472 08:55:20 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:58.472 08:55:20 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:58.472 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=33837056, buflen=4096 00:16:58.472 fio: pid=1332135, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:58.731 08:55:21 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:58.731 08:55:21 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:58.731 00:16:58.731 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1332122: Sun Jun 9 08:55:21 2024 00:16:58.731 read: IOPS=7209, BW=28.2MiB/s (29.5MB/s)(87.1MiB/3094msec) 
00:16:58.731 slat (usec): min=5, max=11393, avg=10.37, stdev=126.44 00:16:58.731 clat (usec): min=55, max=24240, avg=125.79, stdev=163.33 00:16:58.731 lat (usec): min=62, max=24247, avg=136.16, stdev=206.57 00:16:58.731 clat percentiles (usec): 00:16:58.731 | 1.00th=[ 71], 5.00th=[ 80], 10.00th=[ 87], 20.00th=[ 111], 00:16:58.731 | 30.00th=[ 116], 40.00th=[ 120], 50.00th=[ 124], 60.00th=[ 128], 00:16:58.731 | 70.00th=[ 135], 80.00th=[ 147], 90.00th=[ 157], 95.00th=[ 163], 00:16:58.731 | 99.00th=[ 178], 99.50th=[ 186], 99.90th=[ 208], 99.95th=[ 235], 00:16:58.731 | 99.99th=[ 363] 00:16:58.731 bw ( KiB/s): min=24648, max=31048, per=26.40%, avg=28561.60, stdev=2563.14, samples=5 00:16:58.731 iops : min= 6162, max= 7762, avg=7140.40, stdev=640.79, samples=5 00:16:58.731 lat (usec) : 100=13.52%, 250=86.43%, 500=0.04% 00:16:58.731 lat (msec) : 50=0.01% 00:16:58.731 cpu : usr=2.97%, sys=9.51%, ctx=22312, majf=0, minf=1 00:16:58.731 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:58.731 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.732 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.732 issued rwts: total=22306,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:58.732 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:58.732 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1332135: Sun Jun 9 08:55:21 2024 00:16:58.732 read: IOPS=7502, BW=29.3MiB/s (30.7MB/s)(96.3MiB/3285msec) 00:16:58.732 slat (usec): min=6, max=11889, avg=11.63, stdev=146.95 00:16:58.732 clat (usec): min=45, max=24927, avg=119.54, stdev=161.69 00:16:58.732 lat (usec): min=59, max=24939, avg=131.17, stdev=218.39 00:16:58.732 clat percentiles (usec): 00:16:58.732 | 1.00th=[ 59], 5.00th=[ 63], 10.00th=[ 69], 20.00th=[ 90], 00:16:58.732 | 30.00th=[ 112], 40.00th=[ 117], 50.00th=[ 122], 60.00th=[ 126], 00:16:58.732 | 70.00th=[ 133], 80.00th=[ 145], 90.00th=[ 157], 95.00th=[ 163], 00:16:58.732 | 99.00th=[ 182], 99.50th=[ 196], 99.90th=[ 219], 99.95th=[ 293], 00:16:58.732 | 99.99th=[ 996] 00:16:58.732 bw ( KiB/s): min=25280, max=30941, per=26.68%, avg=28862.17, stdev=2283.91, samples=6 00:16:58.732 iops : min= 6320, max= 7735, avg=7215.50, stdev=570.93, samples=6 00:16:58.732 lat (usec) : 50=0.01%, 100=22.08%, 250=77.84%, 500=0.03%, 750=0.01% 00:16:58.732 lat (usec) : 1000=0.01% 00:16:58.732 lat (msec) : 4=0.01%, 50=0.01% 00:16:58.732 cpu : usr=3.11%, sys=10.78%, ctx=24652, majf=0, minf=1 00:16:58.732 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:58.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.732 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.732 issued rwts: total=24646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:58.732 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:58.732 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1332146: Sun Jun 9 08:55:21 2024 00:16:58.732 read: IOPS=7289, BW=28.5MiB/s (29.9MB/s)(82.8MiB/2909msec) 00:16:58.732 slat (usec): min=6, max=15694, avg=11.03, stdev=131.07 00:16:58.732 clat (usec): min=74, max=548, avg=123.17, stdev=23.68 00:16:58.732 lat (usec): min=83, max=15817, avg=134.21, stdev=133.21 00:16:58.732 clat percentiles (usec): 00:16:58.732 | 1.00th=[ 83], 5.00th=[ 87], 10.00th=[ 89], 20.00th=[ 96], 00:16:58.732 | 30.00th=[ 116], 40.00th=[ 123], 50.00th=[ 127], 60.00th=[ 
130], 00:16:58.732 | 70.00th=[ 133], 80.00th=[ 139], 90.00th=[ 153], 95.00th=[ 161], 00:16:58.732 | 99.00th=[ 186], 99.50th=[ 200], 99.90th=[ 217], 99.95th=[ 223], 00:16:58.732 | 99.99th=[ 334] 00:16:58.732 bw ( KiB/s): min=27480, max=33136, per=27.63%, avg=29889.60, stdev=2614.81, samples=5 00:16:58.732 iops : min= 6870, max= 8284, avg=7472.40, stdev=653.70, samples=5 00:16:58.732 lat (usec) : 100=22.87%, 250=77.09%, 500=0.03%, 750=0.01% 00:16:58.732 cpu : usr=3.37%, sys=10.52%, ctx=21207, majf=0, minf=1 00:16:58.732 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:58.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.732 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.732 issued rwts: total=21205,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:58.732 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:58.732 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1332157: Sun Jun 9 08:55:21 2024 00:16:58.732 read: IOPS=7653, BW=29.9MiB/s (31.3MB/s)(80.8MiB/2704msec) 00:16:58.732 slat (nsec): min=3665, max=75176, avg=6999.11, stdev=1540.24 00:16:58.732 clat (usec): min=68, max=783, avg=121.55, stdev=24.49 00:16:58.732 lat (usec): min=76, max=787, avg=128.55, stdev=25.36 00:16:58.732 clat percentiles (usec): 00:16:58.732 | 1.00th=[ 80], 5.00th=[ 84], 10.00th=[ 86], 20.00th=[ 91], 00:16:58.732 | 30.00th=[ 115], 40.00th=[ 125], 50.00th=[ 128], 60.00th=[ 131], 00:16:58.732 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 151], 95.00th=[ 157], 00:16:58.732 | 99.00th=[ 167], 99.50th=[ 169], 99.90th=[ 182], 99.95th=[ 217], 00:16:58.732 | 99.99th=[ 449] 00:16:58.732 bw ( KiB/s): min=27688, max=32904, per=27.75%, avg=30025.60, stdev=1895.62, samples=5 00:16:58.732 iops : min= 6922, max= 8226, avg=7506.40, stdev=473.90, samples=5 00:16:58.732 lat (usec) : 100=27.77%, 250=72.19%, 500=0.03%, 750=0.01%, 1000=0.01% 00:16:58.732 cpu : usr=1.70%, sys=6.88%, ctx=20695, majf=0, minf=2 00:16:58.732 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:58.732 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.732 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.732 issued rwts: total=20694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:58.732 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:58.732 00:16:58.732 Run status group 0 (all jobs): 00:16:58.732 READ: bw=106MiB/s (111MB/s), 28.2MiB/s-29.9MiB/s (29.5MB/s-31.3MB/s), io=347MiB (364MB), run=2704-3285msec 00:16:58.732 00:16:58.732 Disk stats (read/write): 00:16:58.732 nvme0n1: ios=20322/0, merge=0/0, ticks=2466/0, in_queue=2466, util=94.79% 00:16:58.732 nvme0n2: ios=22428/0, merge=0/0, ticks=2623/0, in_queue=2623, util=94.74% 00:16:58.732 nvme0n3: ios=20987/0, merge=0/0, ticks=2395/0, in_queue=2395, util=95.67% 00:16:58.732 nvme0n4: ios=19859/0, merge=0/0, ticks=2336/0, in_queue=2336, util=96.48% 00:16:58.732 08:55:21 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:58.732 08:55:21 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:58.991 08:55:21 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:58.991 08:55:21 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- 
# /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:59.250 08:55:21 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:59.250 08:55:21 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:59.507 08:55:21 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:59.507 08:55:21 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:59.507 08:55:22 nvmf_rdma.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:16:59.507 08:55:22 nvmf_rdma.nvmf_fio_target -- target/fio.sh@70 -- # wait 1331885 00:16:59.507 08:55:22 nvmf_rdma.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:16:59.507 08:55:22 nvmf_rdma.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:00.443 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:00.443 08:55:22 nvmf_rdma.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:00.443 08:55:22 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1218 -- # local i=0 00:17:00.443 08:55:22 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:17:00.443 08:55:22 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:00.443 08:55:22 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:17:00.443 08:55:22 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:00.443 08:55:22 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1230 -- # return 0 00:17:00.443 08:55:22 nvmf_rdma.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:00.443 08:55:22 nvmf_rdma.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:00.443 nvmf hotplug test: fio failed as expected 00:17:00.443 08:55:22 nvmf_rdma.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.711 08:55:23 nvmf_rdma.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:00.711 08:55:23 nvmf_rdma.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:00.711 08:55:23 nvmf_rdma.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:00.711 08:55:23 nvmf_rdma.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:00.711 08:55:23 nvmf_rdma.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:17:00.711 08:55:23 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:00.711 08:55:23 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:17:00.711 08:55:23 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:00.711 08:55:23 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:00.711 08:55:23 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:17:00.711 08:55:23 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:00.711 08:55:23 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:00.711 rmmod 
nvme_rdma 00:17:00.711 rmmod nvme_fabrics 00:17:00.711 08:55:23 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:00.711 08:55:23 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:17:00.711 08:55:23 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:17:00.711 08:55:23 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1329221 ']' 00:17:00.711 08:55:23 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1329221 00:17:00.711 08:55:23 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@949 -- # '[' -z 1329221 ']' 00:17:00.711 08:55:23 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@953 -- # kill -0 1329221 00:17:00.711 08:55:23 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@954 -- # uname 00:17:00.711 08:55:23 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:00.711 08:55:23 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1329221 00:17:00.711 08:55:23 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:17:00.711 08:55:23 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:17:00.711 08:55:23 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1329221' 00:17:00.711 killing process with pid 1329221 00:17:00.711 08:55:23 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@968 -- # kill 1329221 00:17:00.711 08:55:23 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@973 -- # wait 1329221 00:17:00.970 08:55:23 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:00.970 08:55:23 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:17:00.970 00:17:00.970 real 0m23.959s 00:17:00.970 user 1m48.727s 00:17:00.970 sys 0m8.164s 00:17:00.970 08:55:23 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:00.970 08:55:23 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.970 ************************************ 00:17:00.970 END TEST nvmf_fio_target 00:17:00.970 ************************************ 00:17:00.970 08:55:23 nvmf_rdma -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:17:00.970 08:55:23 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:00.970 08:55:23 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:00.970 08:55:23 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:17:00.970 ************************************ 00:17:00.970 START TEST nvmf_bdevio 00:17:00.970 ************************************ 00:17:00.970 08:55:23 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:17:01.229 * Looking for test storage... 
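The teardown that closes each suite above follows one pattern: unload the initiator-side kernel modules with modprobe -v -r (inside a retry loop, since they can still be in use while queues drain), then kill the nvmf_tgt process by pid and reap it. A minimal sketch of that pattern, assuming $nvmfpid was captured when the target was launched (helper names are illustrative, not the exact common.sh definitions):

    # Initiator modules first; '-v -r' prints what gets removed,
    # and the retry loop (common.sh uses {1..20}) tolerates EBUSY.
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e
    # Then stop the target app so the next suite starts clean.
    kill "$nvmfpid" && wait "$nvmfpid"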
00:17:01.229 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@22 
-- # eval '_remove_spdk_ns 14> /dev/null' 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:17:01.229 08:55:23 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 
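The NIC discovery above classifies ports purely by PCI vendor:device ID: 0x8086:0x1592/0x159b are Intel E810, 0x8086:0x37d2 is X722, and the 0x15b3 entries cover the Mellanox mlx5 family; because this rig is E810 (SPDK_TEST_NVMF_NICS=e810), pci_devs collapses to the e810 list. A standalone sketch of the same classification using lspci output (an assumption for illustration; common.sh reads a prebuilt pci_bus_cache instead):

    # Gather PCI addresses of E810 ports (device IDs 0x1592 and 0x159b).
    e810=()
    while read -r addr _; do
        e810+=("$addr")
    done < <(lspci -Dnn | grep -E '\[8086:(1592|159b)\]')
    echo "E810 ports: ${e810[*]}"    # e.g. 0000:af:00.0 0000:af:00.1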
00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:06.497 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:06.497 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@377 -- # modinfo irdma 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:06.497 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:06.498 Found net devices under 0000:af:00.0: cvl_0_0 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:06.498 Found net devices under 0000:af:00.1: cvl_0_1 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@420 -- # rdma_device_init 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@58 -- # uname 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@502 -- # allocate_nic_ips 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo cvl_0_0 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.498 
08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo cvl_0_1 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:17:06.498 8: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:17:06.498 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:17:06.498 altname enp175s0f0np0 00:17:06.498 altname ens801f0np0 00:17:06.498 inet 192.168.100.8/24 scope global cvl_0_0 00:17:06.498 valid_lft forever preferred_lft forever 00:17:06.498 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:17:06.498 valid_lft forever preferred_lft forever 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:17:06.498 9: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:17:06.498 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:17:06.498 altname enp175s0f1np1 00:17:06.498 altname ens801f1np1 00:17:06.498 inet 192.168.100.9/24 scope global cvl_0_1 00:17:06.498 valid_lft forever preferred_lft forever 00:17:06.498 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:17:06.498 valid_lft forever preferred_lft forever 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # 
get_rdma_if_list 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo cvl_0_0 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo cvl_0_1 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:06.498 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:17:06.499 192.168.100.9' 00:17:06.499 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:17:06.499 192.168.100.9' 00:17:06.499 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # head -n 1 00:17:06.499 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:06.499 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:17:06.499 192.168.100.9' 00:17:06.499 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # tail -n +2 00:17:06.499 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # head -n 1 
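get_ip_address above is worth isolating: 'ip -o -4 addr show <if>' prints one line per address with the CIDR in the fourth field, so awk plus cut reduces it to the bare IPv4 (192.168.100.8 and .9 here, on cvl_0_0/cvl_0_1 once the RDMA modules are loaded). The same pipeline as a reusable function:

    get_ip_address() {
        local interface=$1
        # Field 4 is 'addr/prefix'; drop the prefix length.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address cvl_0_0    # -> 192.168.100.8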
00:17:06.499 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:06.499 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:17:06.499 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:06.499 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:17:06.499 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:17:06.499 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:17:06.499 08:55:28 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:06.499 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:06.499 08:55:28 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:06.499 08:55:28 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:06.499 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1336026 00:17:06.499 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1336026 00:17:06.499 08:55:28 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@830 -- # '[' -z 1336026 ']' 00:17:06.499 08:55:28 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.499 08:55:28 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:06.499 08:55:28 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.499 08:55:28 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:06.499 08:55:28 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:06.499 08:55:28 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:06.499 [2024-06-09 08:55:28.784462] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:17:06.499 [2024-06-09 08:55:28.784504] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.499 EAL: No free 2048 kB hugepages reported on node 1 00:17:06.499 [2024-06-09 08:55:28.838949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:06.499 [2024-06-09 08:55:28.916766] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:06.499 [2024-06-09 08:55:28.916801] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:06.499 [2024-06-09 08:55:28.916808] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:06.499 [2024-06-09 08:55:28.916814] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:06.499 [2024-06-09 08:55:28.916819] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
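nvmfappstart above boils down to launching the target binary with a shared-memory id, a full tracepoint mask, and a core mask, then blocking until its RPC socket answers: -i 0 -e 0xFFFF -m 0x78 pins four reactors on cores 3-6, matching the reactor messages below. A minimal equivalent, with a simple socket poll standing in for common.sh's waitforlisten (which polls over RPC):

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!
    # Do not issue RPCs until the app has created its UNIX-domain RPC socket.
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done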
00:17:06.499 [2024-06-09 08:55:28.916930] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:17:06.499 [2024-06-09 08:55:28.917049] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 5 00:17:06.499 [2024-06-09 08:55:28.917155] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:17:06.499 [2024-06-09 08:55:28.917156] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 6 00:17:07.065 08:55:29 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:07.065 08:55:29 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@863 -- # return 0 00:17:07.065 08:55:29 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:07.065 08:55:29 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:07.065 08:55:29 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:07.065 08:55:29 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.065 08:55:29 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:07.065 08:55:29 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:07.065 08:55:29 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:07.324 [2024-06-09 08:55:29.636085] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x16961d0/0x1695810) succeed. 00:17:07.324 [2024-06-09 08:55:29.644914] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1697580/0x1695d90) succeed. 00:17:07.324 [2024-06-09 08:55:29.644941] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:17:07.324 08:55:29 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:07.324 08:55:29 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:07.324 08:55:29 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:07.324 08:55:29 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:07.324 Malloc0 00:17:07.324 08:55:29 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:07.324 08:55:29 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:07.324 08:55:29 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:07.324 08:55:29 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:07.324 08:55:29 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:07.324 08:55:29 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:07.324 08:55:29 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:07.324 08:55:29 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:07.324 08:55:29 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:07.324 08:55:29 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:07.324 08:55:29 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:07.324 08:55:29 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:07.324 [2024-06-09 08:55:29.691633] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:07.324 08:55:29 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:07.324 08:55:29 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:07.324 08:55:29 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:07.324 08:55:29 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:17:07.324 08:55:29 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:17:07.324 08:55:29 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:07.324 08:55:29 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:07.324 { 00:17:07.324 "params": { 00:17:07.324 "name": "Nvme$subsystem", 00:17:07.324 "trtype": "$TEST_TRANSPORT", 00:17:07.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:07.324 "adrfam": "ipv4", 00:17:07.324 "trsvcid": "$NVMF_PORT", 00:17:07.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:07.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:07.324 "hdgst": ${hdgst:-false}, 00:17:07.324 "ddgst": ${ddgst:-false} 00:17:07.324 }, 00:17:07.324 "method": "bdev_nvme_attach_controller" 00:17:07.324 } 00:17:07.324 EOF 00:17:07.324 )") 00:17:07.324 08:55:29 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:17:07.324 08:55:29 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
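The RPC sequence above is the standard way to expose a bdev over NVMe-oF RDMA: create the transport once, create a malloc-backed subsystem, then attach a listener on the RDMA-capable IP. Collected in order (the same calls visible in the trace; the full scripts/rpc.py path is abbreviated here):

    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420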
00:17:07.324 08:55:29 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:17:07.324 08:55:29 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:07.324 "params": { 00:17:07.324 "name": "Nvme1", 00:17:07.324 "trtype": "rdma", 00:17:07.324 "traddr": "192.168.100.8", 00:17:07.324 "adrfam": "ipv4", 00:17:07.324 "trsvcid": "4420", 00:17:07.324 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:07.324 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:07.324 "hdgst": false, 00:17:07.324 "ddgst": false 00:17:07.324 }, 00:17:07.324 "method": "bdev_nvme_attach_controller" 00:17:07.324 }' 00:17:07.324 [2024-06-09 08:55:29.738410] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:17:07.324 [2024-06-09 08:55:29.738453] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1336236 ] 00:17:07.324 EAL: No free 2048 kB hugepages reported on node 1 00:17:07.324 [2024-06-09 08:55:29.792795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:07.324 [2024-06-09 08:55:29.866741] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:17:07.324 [2024-06-09 08:55:29.866847] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:17:07.324 [2024-06-09 08:55:29.866849] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.583 I/O targets: 00:17:07.583 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:07.583 00:17:07.583 00:17:07.583 CUnit - A unit testing framework for C - Version 2.1-3 00:17:07.583 http://cunit.sourceforge.net/ 00:17:07.583 00:17:07.583 00:17:07.583 Suite: bdevio tests on: Nvme1n1 00:17:07.583 Test: blockdev write read block ...passed 00:17:07.583 Test: blockdev write zeroes read block ...passed 00:17:07.583 Test: blockdev write zeroes read no split ...passed 00:17:07.583 Test: blockdev write zeroes read split ...passed 00:17:07.583 Test: blockdev write zeroes read split partial ...passed 00:17:07.583 Test: blockdev reset ...[2024-06-09 08:55:30.060215] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:07.583 [2024-06-09 08:55:30.084975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:07.583 [2024-06-09 08:55:30.112131] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:07.583 passed 00:17:07.583 Test: blockdev write read 8 blocks ...passed 00:17:07.583 Test: blockdev write read size > 128k ...passed 00:17:07.583 Test: blockdev write read invalid size ...passed 00:17:07.583 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:07.583 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:07.583 Test: blockdev write read max offset ...passed 00:17:07.583 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:07.583 Test: blockdev writev readv 8 blocks ...passed 00:17:07.583 Test: blockdev writev readv 30 x 1block ...passed 00:17:07.583 Test: blockdev writev readv block ...passed 00:17:07.583 Test: blockdev writev readv size > 128k ...passed 00:17:07.583 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:07.583 Test: blockdev comparev and writev ...[2024-06-09 08:55:30.115464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.583 [2024-06-09 08:55:30.115491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:07.583 [2024-06-09 08:55:30.115500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.583 [2024-06-09 08:55:30.115507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:07.583 [2024-06-09 08:55:30.115681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.583 [2024-06-09 08:55:30.115690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:07.583 [2024-06-09 08:55:30.115697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.583 [2024-06-09 08:55:30.115704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:07.583 [2024-06-09 08:55:30.115890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.583 [2024-06-09 08:55:30.115898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:07.583 [2024-06-09 08:55:30.115908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.583 [2024-06-09 08:55:30.115915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:07.583 [2024-06-09 08:55:30.116090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.583 [2024-06-09 08:55:30.116098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:07.583 [2024-06-09 08:55:30.116106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:07.583 [2024-06-09 08:55:30.116113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:07.583 passed 00:17:07.583 Test: blockdev nvme passthru rw ...passed 00:17:07.583 Test: blockdev nvme passthru vendor specific ...[2024-06-09 08:55:30.116408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:07.583 [2024-06-09 08:55:30.116417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:07.583 [2024-06-09 08:55:30.116465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:07.583 [2024-06-09 08:55:30.116472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:07.583 [2024-06-09 08:55:30.116530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:07.583 [2024-06-09 08:55:30.116537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:07.583 [2024-06-09 08:55:30.116584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:07.583 [2024-06-09 08:55:30.116592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:07.583 passed 00:17:07.583 Test: blockdev nvme admin passthru ...passed 00:17:07.583 Test: blockdev copy ...passed 00:17:07.583 00:17:07.583 Run Summary: Type Total Ran Passed Failed Inactive 00:17:07.583 suites 1 1 n/a 0 0 00:17:07.583 tests 23 23 23 0 0 00:17:07.583 asserts 152 152 152 0 n/a 00:17:07.583 00:17:07.583 Elapsed time = 0.183 seconds 00:17:07.842 08:55:30 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:07.842 08:55:30 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:07.842 08:55:30 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:07.842 08:55:30 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:07.842 08:55:30 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:07.842 08:55:30 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:17:07.842 08:55:30 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:07.842 08:55:30 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:17:07.842 08:55:30 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:17:07.842 08:55:30 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:17:07.842 08:55:30 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:17:07.842 08:55:30 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:07.842 08:55:30 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:17:07.842 rmmod nvme_rdma 00:17:07.842 rmmod nvme_fabrics 00:17:07.842 08:55:30 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:07.842 08:55:30 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:17:07.842 08:55:30 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:17:07.842 08:55:30 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1336026 ']' 00:17:07.842 08:55:30 
nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1336026 00:17:07.842 08:55:30 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@949 -- # '[' -z 1336026 ']' 00:17:07.842 08:55:30 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@953 -- # kill -0 1336026 00:17:07.842 08:55:30 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@954 -- # uname 00:17:07.842 08:55:30 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:17:07.842 08:55:30 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1336026 00:17:07.842 08:55:30 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@955 -- # process_name=reactor_3 00:17:08.101 08:55:30 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' reactor_3 = sudo ']' 00:17:08.101 08:55:30 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1336026' 00:17:08.101 killing process with pid 1336026 00:17:08.101 08:55:30 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@968 -- # kill 1336026 00:17:08.101 08:55:30 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@973 -- # wait 1336026 00:17:08.101 08:55:30 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:08.101 08:55:30 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:17:08.101 00:17:08.101 real 0m7.178s 00:17:08.101 user 0m9.455s 00:17:08.101 sys 0m4.342s 00:17:08.101 08:55:30 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@1125 -- # xtrace_disable 00:17:08.101 08:55:30 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:08.101 ************************************ 00:17:08.101 END TEST nvmf_bdevio 00:17:08.101 ************************************ 00:17:08.360 08:55:30 nvmf_rdma -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:17:08.360 08:55:30 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:17:08.360 08:55:30 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:17:08.360 08:55:30 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:17:08.360 ************************************ 00:17:08.360 START TEST nvmf_auth_target 00:17:08.360 ************************************ 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:17:08.360 * Looking for test storage... 
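As with the earlier suites, the auth suite's common.sh preamble (just below) derives its host identity from nvme-cli: nvme gen-hostnqn emits an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, and the UUID tail doubles as the host ID passed alongside it. A sketch of that derivation (the UUID-stripping step is an assumption about how common.sh computes NVME_HOSTID; the NVME_HOST array matches the trace):

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # keep only the trailing UUID (assumed)
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # e.g. later: nvme connect "${NVME_HOST[@]}" -t rdma -a 192.168.100.8 -s 4420 -n <subnqn>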
00:17:08.360 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@59 -- # nvmftestinit 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:08.360 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.361 08:55:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:08.361 08:55:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.361 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:08.361 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:08.361 08:55:30 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:08.361 08:55:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:13.631 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:13.631 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@377 -- # modinfo irdma 00:17:13.631 08:55:35 
nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:13.631 Found net devices under 0000:af:00.0: cvl_0_0 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:13.631 Found net devices under 0000:af:00.1: cvl_0_1 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@420 -- # rdma_device_init 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@58 -- # uname 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- 
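
Before any connection is attempted, the trace above binds the two E810 ports (device id 0x159b, ice driver) to RDMA: irdma is loaded with roce_ena=1 (the driver's default for these ports is otherwise iWARP, so this appears to select RoCEv2), then load_ib_rdma_modules pulls in the generic IB/RDMA core. In the order shown above:

    # module loads performed above (nvmf/common.sh@377 and @62-68 in this trace)
    modprobe irdma roce_ena=1   # RDMA driver for the E810 ports, RoCE enabled
    modprobe ib_cm
    modprobe ib_core
    modprobe ib_umad
    modprobe ib_uverbs
    modprobe iw_cm
    modprobe rdma_cm
    modprobe rdma_ucm
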
nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo cvl_0_0 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:13.631 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo cvl_0_1 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:17:13.632 8: cvl_0_0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:17:13.632 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:17:13.632 altname enp175s0f0np0 00:17:13.632 altname ens801f0np0 00:17:13.632 inet 192.168.100.8/24 scope global cvl_0_0 00:17:13.632 valid_lft forever preferred_lft forever 00:17:13.632 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:17:13.632 valid_lft forever preferred_lft forever 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:13.632
08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:17:13.632 9: cvl_0_1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:17:13.632 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:17:13.632 altname enp175s0f1np1 00:17:13.632 altname ens801f1np1 00:17:13.632 inet 192.168.100.9/24 scope global cvl_0_1 00:17:13.632 valid_lft forever preferred_lft forever 00:17:13.632 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:17:13.632 valid_lft forever preferred_lft forever 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo cvl_0_0 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo cvl_0_1 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target --
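
allocate_nic_ips above resolved 192.168.100.8 and 192.168.100.9 for cvl_0_0/cvl_0_1 with the same three-command pipeline each time; reconstructed as a function directly from the trace:

    # get_ip_address as it appears at nvmf/common.sh@112-113 in this trace:
    # "ip -o" prints one line per address, field 4 is "ADDR/PREFIXLEN"
    get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address cvl_0_0   # -> 192.168.100.8 on this node
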
nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:17:13.632 192.168.100.9' 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:17:13.632 192.168.100.9' 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # head -n 1 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # tail -n +2 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:17:13.632 192.168.100.9' 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # head -n 1 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1339434 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1339434 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 1339434 ']' 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
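
nvmfappstart has just launched the NVMe-oF target; a second SPDK app is started immediately below to act as the host/initiator side, pinned to its own rpc socket so the two sides can be driven independently. The two command lines, as they appear in this trace (flags per the usual SPDK app options: -i shared-memory id, -e tracepoint group mask, -m core mask, -r rpc socket, -L debug log flag):

    # the two SPDK processes under test (full paths appear in this trace)
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth               # target; default /var/tmp/spdk.sock, pid 1339434
    build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth   # host side, pid 1339517
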
00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:13.632 08:55:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.568 08:55:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:14.568 08:55:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:17:14.568 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:14.568 08:55:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:17:14.568 08:55:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.568 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:14.568 08:55:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1339517 00:17:14.568 08:55:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:14.568 08:55:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:14.568 08:55:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:14.568 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:14.568 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:14.568 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:14.568 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:14.568 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:14.568 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e3bcddcf6693b6e6cf592b0cd4ebe4e6e2d733fc94a146fb 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.67K 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e3bcddcf6693b6e6cf592b0cd4ebe4e6e2d733fc94a146fb 0 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e3bcddcf6693b6e6cf592b0cd4ebe4e6e2d733fc94a146fb 0 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e3bcddcf6693b6e6cf592b0cd4ebe4e6e2d733fc94a146fb 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.67K 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.67K 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.67K 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- 
nvmf/common.sh@723 -- # local digest len file key 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=dfeca54b116ae4f98f5338861f403a15a11af3bc48568ee6fab88e744adfb3bf 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.5Se 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key dfeca54b116ae4f98f5338861f403a15a11af3bc48568ee6fab88e744adfb3bf 3 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 dfeca54b116ae4f98f5338861f403a15a11af3bc48568ee6fab88e744adfb3bf 3 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=dfeca54b116ae4f98f5338861f403a15a11af3bc48568ee6fab88e744adfb3bf 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.5Se 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.5Se 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.5Se 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=17222b7a44ba7239dc3ad679895c8c82 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.AjE 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 17222b7a44ba7239dc3ad679895c8c82 1 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 17222b7a44ba7239dc3ad679895c8c82 1 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # 
prefix=DHHC-1 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=17222b7a44ba7239dc3ad679895c8c82 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:14.569 08:55:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.AjE 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.AjE 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.AjE 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1e2f40fc3ec35af6e59f56c01caae221b5fb7b2a85407268 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.2D6 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1e2f40fc3ec35af6e59f56c01caae221b5fb7b2a85407268 2 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1e2f40fc3ec35af6e59f56c01caae221b5fb7b2a85407268 2 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1e2f40fc3ec35af6e59f56c01caae221b5fb7b2a85407268 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.2D6 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.2D6 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.2D6 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target 
-- nvmf/common.sh@727 -- # key=519864e5a65f76656a374048f6aee81ffb1d25012a4fbd5b 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Chs 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 519864e5a65f76656a374048f6aee81ffb1d25012a4fbd5b 2 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 519864e5a65f76656a374048f6aee81ffb1d25012a4fbd5b 2 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=519864e5a65f76656a374048f6aee81ffb1d25012a4fbd5b 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Chs 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Chs 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.Chs 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:14.569 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8db23a4e55ae0c4205fb4eaa87e57933 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.jvX 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8db23a4e55ae0c4205fb4eaa87e57933 1 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8db23a4e55ae0c4205fb4eaa87e57933 1 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8db23a4e55ae0c4205fb4eaa87e57933 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.jvX 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.jvX 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.jvX 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a3b80865b0c25ec2f9217bc7521b0e3d70690d18156c6fa2da33fa4fcc0b59f6 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.tTG 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a3b80865b0c25ec2f9217bc7521b0e3d70690d18156c6fa2da33fa4fcc0b59f6 3 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a3b80865b0c25ec2f9217bc7521b0e3d70690d18156c6fa2da33fa4fcc0b59f6 3 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a3b80865b0c25ec2f9217bc7521b0e3d70690d18156c6fa2da33fa4fcc0b59f6 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.tTG 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.tTG 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.tTG 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1339434 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 1339434 ']' 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
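
Each key slot above follows the same recipe: pull N random bytes from /dev/urandom, hex-encode them with xxd to get the plaintext key, then wrap that into an NVMe DH-HMAC-CHAP secret via an inline python step whose body the trace does not echo. The DHHC-1 secrets that surface later in the nvme connect calls are consistent with base64 over the ASCII hex string plus a little-endian CRC32, with the two-digit field naming the hash per the digests map shown above (00=null, 01=sha256, 02=sha384, 03=sha512). A sketch of that encoding, under those assumptions:

    # gen_dhchap_key null 48, reconstructed; the python body is an assumption,
    # checked only against the DHHC-1 secrets used later in this trace
    key=$(xxd -p -c0 -l 24 /dev/urandom)   # 24 random bytes -> 48 hex chars
    python - "$key" <<'EOF'
    import base64, sys, zlib
    key = sys.argv[1].encode()
    crc = zlib.crc32(key).to_bytes(4, "little")   # CRC32 appended to the secret
    # digest id 00 (null) here; 01/02/03 for sha256/sha384/sha512
    print("DHHC-1:00:%s:" % base64.b64encode(key + crc).decode())
    EOF

For instance, key0 above (e3bcddcf...46fb) reappears below as DHHC-1:00:ZTNiY2Rk...sw==:, i.e. base64 of that exact hex string plus a 4-byte trailer.
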
00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:14.829 08:55:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.088 08:55:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:15.088 08:55:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:17:15.088 08:55:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1339517 /var/tmp/host.sock 00:17:15.088 08:55:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 1339517 ']' 00:17:15.088 08:55:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/host.sock 00:17:15.088 08:55:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:17:15.088 08:55:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:15.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:15.088 08:55:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:17:15.088 08:55:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.088 08:55:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:17:15.088 08:55:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:17:15.088 08:55:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:17:15.088 08:55:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:15.088 08:55:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.089 08:55:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:15.089 08:55:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:15.089 08:55:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.67K 00:17:15.089 08:55:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:15.089 08:55:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.089 08:55:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:15.089 08:55:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.67K 00:17:15.089 08:55:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.67K 00:17:15.347 08:55:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.5Se ]] 00:17:15.347 08:55:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5Se 00:17:15.347 08:55:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:15.347 08:55:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.347 08:55:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:15.347 08:55:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5Se 00:17:15.347 08:55:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5Se 00:17:15.606 08:55:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:15.606 08:55:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.AjE 00:17:15.606 08:55:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:15.606 08:55:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.606 08:55:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:15.606 08:55:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.AjE 00:17:15.606 08:55:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.AjE 00:17:15.865 08:55:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.2D6 ]] 00:17:15.865 08:55:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2D6 00:17:15.865 08:55:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:15.865 08:55:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.865 08:55:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:15.865 08:55:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2D6 00:17:15.865 08:55:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2D6 00:17:15.865 08:55:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:15.865 08:55:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Chs 00:17:15.865 08:55:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:15.865 08:55:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.865 08:55:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:15.865 08:55:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Chs 00:17:15.865 08:55:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Chs 00:17:16.124 08:55:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.jvX ]] 00:17:16.124 08:55:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jvX 00:17:16.124 08:55:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:16.124 08:55:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.124 08:55:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:16.124 08:55:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jvX 00:17:16.124 08:55:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
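
Every generated key file is registered twice under the same keyring name, once in the target and once in the host app; rpc_cmd talks to the target's default rpc socket while hostrpc passes -s /var/tmp/host.sock, as the expansions above show. Condensed:

    # same keyring name on both sides; only the rpc socket differs
    rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
    $rpc keyring_file_add_key key1 /tmp/spdk.key-sha256.AjE                        # target (/var/tmp/spdk.sock)
    $rpc -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.AjE  # host app
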
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.jvX 00:17:16.383 08:55:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:16.383 08:55:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.tTG 00:17:16.383 08:55:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:16.383 08:55:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.383 08:55:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:16.383 08:55:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.tTG 00:17:16.383 08:55:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.tTG 00:17:16.383 08:55:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:17:16.383 08:55:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:16.383 08:55:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:16.383 08:55:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:16.383 08:55:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:16.383 08:55:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:16.642 08:55:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:17:16.642 08:55:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:16.642 08:55:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:16.642 08:55:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:16.642 08:55:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:16.642 08:55:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.642 08:55:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.642 08:55:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:16.642 08:55:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.642 08:55:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:16.642 08:55:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.642 08:55:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.901 00:17:16.901 08:55:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.901 08:55:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.901 08:55:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.901 08:55:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.901 08:55:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.901 08:55:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:16.901 08:55:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.160 08:55:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:17.160 08:55:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:17.160 { 00:17:17.160 "cntlid": 1, 00:17:17.160 "qid": 0, 00:17:17.160 "state": "enabled", 00:17:17.160 "listen_address": { 00:17:17.160 "trtype": "RDMA", 00:17:17.160 "adrfam": "IPv4", 00:17:17.160 "traddr": "192.168.100.8", 00:17:17.160 "trsvcid": "4420" 00:17:17.160 }, 00:17:17.160 "peer_address": { 00:17:17.160 "trtype": "RDMA", 00:17:17.160 "adrfam": "IPv4", 00:17:17.160 "traddr": "192.168.100.8", 00:17:17.160 "trsvcid": "53970" 00:17:17.160 }, 00:17:17.160 "auth": { 00:17:17.160 "state": "completed", 00:17:17.160 "digest": "sha256", 00:17:17.160 "dhgroup": "null" 00:17:17.160 } 00:17:17.160 } 00:17:17.160 ]' 00:17:17.160 08:55:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:17.160 08:55:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:17.160 08:55:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:17.160 08:55:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:17.160 08:55:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:17.160 08:55:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.160 08:55:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.160 08:55:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.419 08:55:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTNiY2RkY2Y2NjkzYjZlNmNmNTkyYjBjZDRlYmU0ZTZlMmQ3MzNmYzk0YTE0NmZiK9xOsw==: --dhchap-ctrl-secret DHHC-1:03:ZGZlY2E1NGIxMTZhZTRmOThmNTMzODg2MWY0MDNhMTVhMTFhZjNiYzQ4NTY4ZWU2ZmFiODhlNzQ0YWRmYjNiZmCcivk=: 00:17:17.986 08:55:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.986 08:55:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:17.986 08:55:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:17.986 08:55:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.986 08:55:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:17.986 08:55:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.986 08:55:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:17.986 08:55:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:18.245 08:55:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:17:18.245 08:55:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:18.245 08:55:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:18.245 08:55:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:18.245 08:55:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:18.245 08:55:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.245 08:55:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.245 08:55:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:18.245 08:55:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.245 08:55:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:18.245 08:55:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.245 08:55:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.504 00:17:18.504 08:55:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.504 08:55:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.504 08:55:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.763 08:55:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.763 08:55:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.763 08:55:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:18.763 08:55:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.763 08:55:41 
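
One full connect_authenticate pass (key0) has just completed and the key1 pass is starting. Condensed from the repeating pattern in this trace, each pass for a given (digest, dhgroup, keyid) is:

    # connect_authenticate, condensed from the repeating pattern in this trace
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"     # target side
    hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"     # host authenticates
    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'             # expect nvme0
    rpc_cmd nvmf_subsystem_get_qpairs "$subnqn"                      # auth.state == "completed",
                                                                     # digest/dhgroup as configured
    hostrpc bdev_nvme_detach_controller nvme0
    # then the kernel initiator path with the literal DHHC-1 secrets:
    nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid "$hostid" --dhchap-secret "$(cat "${keys[$keyid]}")" \
        --dhchap-ctrl-secret "$(cat "${ckeys[$keyid]}")"
    nvme disconnect -n "$subnqn"
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
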
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:18.763 08:55:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.763 { 00:17:18.763 "cntlid": 3, 00:17:18.763 "qid": 0, 00:17:18.763 "state": "enabled", 00:17:18.763 "listen_address": { 00:17:18.763 "trtype": "RDMA", 00:17:18.763 "adrfam": "IPv4", 00:17:18.763 "traddr": "192.168.100.8", 00:17:18.763 "trsvcid": "4420" 00:17:18.763 }, 00:17:18.763 "peer_address": { 00:17:18.763 "trtype": "RDMA", 00:17:18.763 "adrfam": "IPv4", 00:17:18.763 "traddr": "192.168.100.8", 00:17:18.763 "trsvcid": "51199" 00:17:18.763 }, 00:17:18.763 "auth": { 00:17:18.763 "state": "completed", 00:17:18.763 "digest": "sha256", 00:17:18.763 "dhgroup": "null" 00:17:18.763 } 00:17:18.763 } 00:17:18.763 ]' 00:17:18.763 08:55:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.763 08:55:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:18.763 08:55:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.763 08:55:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:18.763 08:55:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.763 08:55:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.763 08:55:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.763 08:55:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.021 08:55:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MTcyMjJiN2E0NGJhNzIzOWRjM2FkNjc5ODk1YzhjODK/tvc3: --dhchap-ctrl-secret DHHC-1:02:MWUyZjQwZmMzZWMzNWFmNmU1OWY1NmMwMWNhYWUyMjFiNWZiN2IyYTg1NDA3MjY49rCtNQ==: 00:17:19.589 08:55:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.589 08:55:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:19.589 08:55:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:19.589 08:55:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.589 08:55:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:19.589 08:55:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.589 08:55:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:19.589 08:55:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:19.847 08:55:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:17:19.847 08:55:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup 
key ckey qpairs 00:17:19.847 08:55:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:19.847 08:55:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:19.847 08:55:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:19.847 08:55:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.847 08:55:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.847 08:55:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:19.847 08:55:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.847 08:55:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:19.847 08:55:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.847 08:55:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.105 00:17:20.105 08:55:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:20.105 08:55:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:20.105 08:55:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.362 08:55:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.362 08:55:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.362 08:55:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:20.362 08:55:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.362 08:55:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:20.362 08:55:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:20.362 { 00:17:20.362 "cntlid": 5, 00:17:20.362 "qid": 0, 00:17:20.362 "state": "enabled", 00:17:20.362 "listen_address": { 00:17:20.362 "trtype": "RDMA", 00:17:20.362 "adrfam": "IPv4", 00:17:20.362 "traddr": "192.168.100.8", 00:17:20.362 "trsvcid": "4420" 00:17:20.362 }, 00:17:20.362 "peer_address": { 00:17:20.362 "trtype": "RDMA", 00:17:20.362 "adrfam": "IPv4", 00:17:20.362 "traddr": "192.168.100.8", 00:17:20.362 "trsvcid": "38804" 00:17:20.362 }, 00:17:20.362 "auth": { 00:17:20.362 "state": "completed", 00:17:20.362 "digest": "sha256", 00:17:20.362 "dhgroup": "null" 00:17:20.362 } 00:17:20.362 } 00:17:20.362 ]' 00:17:20.362 08:55:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:20.362 08:55:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:20.362 08:55:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 
-- # jq -r '.[0].auth.dhgroup' 00:17:20.362 08:55:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:20.362 08:55:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:20.362 08:55:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.362 08:55:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.362 08:55:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.619 08:55:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTE5ODY0ZTVhNjVmNzY2NTZhMzc0MDQ4ZjZhZWU4MWZmYjFkMjUwMTJhNGZiZDViedavPg==: --dhchap-ctrl-secret DHHC-1:01:OGRiMjNhNGU1NWFlMGM0MjA1ZmI0ZWFhODdlNTc5MzMQ7ZlB: 00:17:21.185 08:55:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.444 08:55:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:21.444 08:55:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:21.444 08:55:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.444 08:55:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:21.444 08:55:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.444 08:55:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:21.444 08:55:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:21.444 08:55:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:17:21.444 08:55:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:21.444 08:55:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:21.444 08:55:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:21.444 08:55:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:21.444 08:55:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.444 08:55:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:21.444 08:55:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:21.444 08:55:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.444 08:55:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:21.444 08:55:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
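The `--dhchap-secret`/`--dhchap-ctrl-secret` strings follow the NVMe DH-HMAC-CHAP secret representation, `DHHC-1:<hh>:<base64 key material>:`, where the two-digit field encodes how the secret is stored (as I read the spec: 00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512). Such secrets can be produced with nvme-cli; the flag spellings below are from recent nvme-cli and worth confirming against `nvme gen-dhchap-key --help` on your build:

```bash
# Generate a SHA-256-transformed DH-HMAC-CHAP secret for a host NQN (example).
nvme gen-dhchap-key --hmac=1 --key-length=32 \
     --nqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
# Expected output shape: DHHC-1:01:<base64...>:
```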
192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:21.444 08:55:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:21.745 00:17:21.745 08:55:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:21.745 08:55:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:21.745 08:55:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.008 08:55:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.008 08:55:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.008 08:55:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:22.008 08:55:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.008 08:55:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:22.008 08:55:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.008 { 00:17:22.008 "cntlid": 7, 00:17:22.008 "qid": 0, 00:17:22.008 "state": "enabled", 00:17:22.008 "listen_address": { 00:17:22.008 "trtype": "RDMA", 00:17:22.008 "adrfam": "IPv4", 00:17:22.008 "traddr": "192.168.100.8", 00:17:22.008 "trsvcid": "4420" 00:17:22.008 }, 00:17:22.008 "peer_address": { 00:17:22.008 "trtype": "RDMA", 00:17:22.008 "adrfam": "IPv4", 00:17:22.008 "traddr": "192.168.100.8", 00:17:22.008 "trsvcid": "36628" 00:17:22.008 }, 00:17:22.008 "auth": { 00:17:22.008 "state": "completed", 00:17:22.008 "digest": "sha256", 00:17:22.008 "dhgroup": "null" 00:17:22.008 } 00:17:22.008 } 00:17:22.008 ]' 00:17:22.008 08:55:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.008 08:55:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:22.008 08:55:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.008 08:55:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:22.009 08:55:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.009 08:55:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.009 08:55:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.009 08:55:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.267 08:55:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTNiODA4NjViMGMyNWVjMmY5MjE3YmM3NTIxYjBlM2Q3MDY5MGQxODE1NmM2ZmEyZGEzM2ZhNGZjYzBiNTlmNnIJGK8=: 00:17:22.835 08:55:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect 
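Note that the `key3` iteration carries no `--dhchap-ctrlr-key` on either the `add_host` or the `attach_controller` call: `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` expands to an empty array when `ckeys[3]` is unset or empty, so this pass exercises unidirectional authentication only (the host proves itself, the controller is not challenged back). The `:+` expansion is the usual bash idiom for an optional flag:

```bash
# ${var:+word} expands to word only when var is set and non-empty.
ckeys=([1]="ckey1" [2]="ckey2" [3]="")        # hypothetical excerpt

args=(${ckeys[3]:+--dhchap-ctrlr-key "ckey3"})
echo "${#args[@]}"    # -> 0: no controller-key flag for key3

args=(${ckeys[1]:+--dhchap-ctrlr-key "ckey1"})
echo "${args[@]}"     # -> --dhchap-ctrlr-key ckey1
```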
-n nqn.2024-03.io.spdk:cnode0 00:17:22.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.835 08:55:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:22.835 08:55:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:22.835 08:55:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.097 08:55:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:23.097 08:55:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:23.097 08:55:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:23.097 08:55:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:23.097 08:55:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:23.097 08:55:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:17:23.097 08:55:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:23.097 08:55:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:23.097 08:55:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:23.097 08:55:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:23.097 08:55:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.097 08:55:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.097 08:55:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:23.097 08:55:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.097 08:55:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:23.097 08:55:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.097 08:55:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.355 00:17:23.355 08:55:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:23.355 08:55:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:23.355 08:55:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.613 08:55:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 
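With `key3` done, the `@92` marker shows the outer loop advancing from `null` to `ffdhe2048`. DH group `null` means plain challenge-response CHAP over the shared secret alone; the `ffdhe*` groups are the finite-field Diffie-Hellman groups of RFC 7919 and add an ephemeral DH exchange to the handshake. Reconstructed from the `@92`/`@93`/`@96` markers, the sweep looks roughly like this (only sha256 and these four groups appear in this excerpt; any wider digest loop sits outside it):

```bash
# Approximate shape of the sweep driving this trace (reconstruction).
keys=(key0 key1 key2 key3)
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096)   # groups seen in this excerpt

for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        hostrpc bdev_nvme_set_options --dhchap-digests sha256 \
                --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha256 "$dhgroup" "$keyid"
    done
done
```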
]] 00:17:23.613 08:55:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.613 08:55:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:23.613 08:55:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.613 08:55:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:23.613 08:55:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:23.613 { 00:17:23.613 "cntlid": 9, 00:17:23.613 "qid": 0, 00:17:23.613 "state": "enabled", 00:17:23.613 "listen_address": { 00:17:23.613 "trtype": "RDMA", 00:17:23.613 "adrfam": "IPv4", 00:17:23.613 "traddr": "192.168.100.8", 00:17:23.613 "trsvcid": "4420" 00:17:23.613 }, 00:17:23.613 "peer_address": { 00:17:23.613 "trtype": "RDMA", 00:17:23.613 "adrfam": "IPv4", 00:17:23.613 "traddr": "192.168.100.8", 00:17:23.613 "trsvcid": "51329" 00:17:23.613 }, 00:17:23.613 "auth": { 00:17:23.613 "state": "completed", 00:17:23.613 "digest": "sha256", 00:17:23.613 "dhgroup": "ffdhe2048" 00:17:23.613 } 00:17:23.613 } 00:17:23.613 ]' 00:17:23.613 08:55:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:23.613 08:55:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:23.613 08:55:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:23.613 08:55:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:23.613 08:55:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:23.613 08:55:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.613 08:55:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.613 08:55:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.871 08:55:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTNiY2RkY2Y2NjkzYjZlNmNmNTkyYjBjZDRlYmU0ZTZlMmQ3MzNmYzk0YTE0NmZiK9xOsw==: --dhchap-ctrl-secret DHHC-1:03:ZGZlY2E1NGIxMTZhZTRmOThmNTMzODg2MWY0MDNhMTVhMTFhZjNiYzQ4NTY4ZWU2ZmFiODhlNzQ0YWRmYjNiZmCcivk=: 00:17:24.437 08:55:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.696 08:55:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:24.696 08:55:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:24.696 08:55:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.696 08:55:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:24.696 08:55:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:24.696 08:55:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:24.696 
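Comparing the `nvme connect` lines across iterations exposes how the key table is laid out: `key0` is an unhashed secret (`DHHC-1:00:`) paired with a SHA-512 controller secret (`DHHC-1:03:`), `key1` (SHA-256) pairs with a SHA-384 controller secret, `key2` reverses that, and `key3` (SHA-512) has no controller secret at all, so the run covers every hash strength on both sides. A reconstruction of that layout, with the secret bodies deliberately elided (the real values are generated earlier in the script):

```bash
# Key layout implied by the connect lines above (reconstruction, values elided).
keys=(  "DHHC-1:00:<unhashed secret>:"   # key0
        "DHHC-1:01:<sha256 secret>:"     # key1
        "DHHC-1:02:<sha384 secret>:"     # key2
        "DHHC-1:03:<sha512 secret>:" )   # key3
ckeys=( "DHHC-1:03:<sha512 secret>:"     # ckey0
        "DHHC-1:02:<sha384 secret>:"     # ckey1
        "DHHC-1:01:<sha256 secret>:"     # ckey2
        "" )                             # ckey3: key3 runs unidirectional
```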
08:55:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:24.696 08:55:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:17:24.696 08:55:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:24.696 08:55:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:24.696 08:55:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:24.696 08:55:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:24.696 08:55:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.696 08:55:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.696 08:55:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:24.696 08:55:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.696 08:55:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:24.696 08:55:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.696 08:55:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.954 00:17:24.954 08:55:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:24.954 08:55:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:24.954 08:55:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.212 08:55:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.212 08:55:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.212 08:55:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:25.212 08:55:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.212 08:55:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:25.212 08:55:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:25.212 { 00:17:25.212 "cntlid": 11, 00:17:25.212 "qid": 0, 00:17:25.212 "state": "enabled", 00:17:25.212 "listen_address": { 00:17:25.212 "trtype": "RDMA", 00:17:25.212 "adrfam": "IPv4", 00:17:25.212 "traddr": "192.168.100.8", 00:17:25.212 "trsvcid": "4420" 00:17:25.212 }, 00:17:25.212 "peer_address": { 00:17:25.212 "trtype": "RDMA", 00:17:25.212 "adrfam": "IPv4", 00:17:25.212 "traddr": "192.168.100.8", 00:17:25.212 "trsvcid": "42482" 00:17:25.212 }, 
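Every `hostrpc` call in the trace resolves to `rpc.py -s /var/tmp/host.sock`, while the bare `rpc_cmd` calls carry no `-s`: the test runs two SPDK applications, the NVMe-oF target on the default RPC socket and a separate host/initiator app listening on `/var/tmp/host.sock`, and the wrapper simply routes commands to the latter. A plausible definition behind the `@31` marker (a reconstruction, not the verbatim source):

```bash
# Likely shape of the hostrpc wrapper seen at target/auth.sh@31 (reconstruction).
rootdir=/path/to/spdk          # placeholder
HOST_SOCK=/var/tmp/host.sock

hostrpc() {
    "$rootdir/scripts/rpc.py" -s "$HOST_SOCK" "$@"
}

hostrpc bdev_nvme_get_controllers   # talks to the host app, not the target
```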
00:17:25.212 "auth": { 00:17:25.212 "state": "completed", 00:17:25.212 "digest": "sha256", 00:17:25.212 "dhgroup": "ffdhe2048" 00:17:25.212 } 00:17:25.212 } 00:17:25.212 ]' 00:17:25.212 08:55:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.212 08:55:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:25.212 08:55:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:25.212 08:55:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:25.212 08:55:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:25.212 08:55:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.212 08:55:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.212 08:55:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.470 08:55:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MTcyMjJiN2E0NGJhNzIzOWRjM2FkNjc5ODk1YzhjODK/tvc3: --dhchap-ctrl-secret DHHC-1:02:MWUyZjQwZmMzZWMzNWFmNmU1OWY1NmMwMWNhYWUyMjFiNWZiN2IyYTg1NDA3MjY49rCtNQ==: 00:17:26.036 08:55:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.294 08:55:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:26.294 08:55:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.294 08:55:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.294 08:55:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:26.294 08:55:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:26.294 08:55:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:26.294 08:55:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:26.294 08:55:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:17:26.294 08:55:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:26.294 08:55:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:26.294 08:55:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:26.294 08:55:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:26.294 08:55:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.294 08:55:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.294 08:55:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.294 08:55:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.294 08:55:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:26.294 08:55:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.294 08:55:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.553 00:17:26.553 08:55:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:26.553 08:55:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.553 08:55:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:26.812 08:55:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.812 08:55:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.812 08:55:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:26.812 08:55:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.812 08:55:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:26.812 08:55:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:26.812 { 00:17:26.812 "cntlid": 13, 00:17:26.812 "qid": 0, 00:17:26.812 "state": "enabled", 00:17:26.812 "listen_address": { 00:17:26.812 "trtype": "RDMA", 00:17:26.812 "adrfam": "IPv4", 00:17:26.812 "traddr": "192.168.100.8", 00:17:26.812 "trsvcid": "4420" 00:17:26.812 }, 00:17:26.812 "peer_address": { 00:17:26.812 "trtype": "RDMA", 00:17:26.812 "adrfam": "IPv4", 00:17:26.812 "traddr": "192.168.100.8", 00:17:26.812 "trsvcid": "48245" 00:17:26.812 }, 00:17:26.812 "auth": { 00:17:26.812 "state": "completed", 00:17:26.812 "digest": "sha256", 00:17:26.812 "dhgroup": "ffdhe2048" 00:17:26.812 } 00:17:26.812 } 00:17:26.812 ]' 00:17:26.812 08:55:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:26.812 08:55:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:26.812 08:55:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:26.812 08:55:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:26.812 08:55:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:26.812 08:55:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.812 08:55:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.812 08:55:49 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.070 08:55:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTE5ODY0ZTVhNjVmNzY2NTZhMzc0MDQ4ZjZhZWU4MWZmYjFkMjUwMTJhNGZiZDViedavPg==: --dhchap-ctrl-secret DHHC-1:01:OGRiMjNhNGU1NWFlMGM0MjA1ZmI0ZWFhODdlNTc5MzMQ7ZlB: 00:17:27.635 08:55:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.894 08:55:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:27.894 08:55:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.894 08:55:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.894 08:55:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.894 08:55:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:27.894 08:55:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:27.894 08:55:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:27.894 08:55:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:17:27.894 08:55:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:27.894 08:55:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:27.894 08:55:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:27.894 08:55:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:27.894 08:55:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.894 08:55:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:27.894 08:55:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:27.894 08:55:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.894 08:55:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:27.894 08:55:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:27.894 08:55:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
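Each combination is actually validated twice: once through SPDK's own initiator (`bdev_nvme_attach_controller` / `bdev_nvme_detach_controller`), then once through the Linux kernel host with `nvme connect`, which accepts the same DH-HMAC-CHAP material directly. `-i 1` limits the connection to a single I/O queue and `--hostid` pins the host identifier to the same UUID used in the host NQN. A template of the kernel-side leg, with the secrets elided on purpose:

```bash
# Kernel-initiator leg of one iteration (template; real secrets elided).
hostid=801347e8-3fd0-e911-906e-0017a4403562

nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
     -q "nqn.2014-08.org.nvmexpress:uuid:${hostid}" --hostid "$hostid" \
     --dhchap-secret 'DHHC-1:02:<host secret>:' \
     --dhchap-ctrl-secret 'DHHC-1:01:<controller secret>:'

nvme disconnect -n nqn.2024-03.io.spdk:cnode0
```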
00:17:28.152 00:17:28.152 08:55:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:28.152 08:55:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:28.152 08:55:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.411 08:55:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.411 08:55:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.411 08:55:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:28.411 08:55:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.411 08:55:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:28.411 08:55:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.411 { 00:17:28.411 "cntlid": 15, 00:17:28.411 "qid": 0, 00:17:28.411 "state": "enabled", 00:17:28.411 "listen_address": { 00:17:28.411 "trtype": "RDMA", 00:17:28.411 "adrfam": "IPv4", 00:17:28.411 "traddr": "192.168.100.8", 00:17:28.411 "trsvcid": "4420" 00:17:28.411 }, 00:17:28.411 "peer_address": { 00:17:28.411 "trtype": "RDMA", 00:17:28.411 "adrfam": "IPv4", 00:17:28.411 "traddr": "192.168.100.8", 00:17:28.411 "trsvcid": "52072" 00:17:28.411 }, 00:17:28.411 "auth": { 00:17:28.411 "state": "completed", 00:17:28.411 "digest": "sha256", 00:17:28.411 "dhgroup": "ffdhe2048" 00:17:28.411 } 00:17:28.411 } 00:17:28.411 ]' 00:17:28.411 08:55:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.411 08:55:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:28.411 08:55:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.411 08:55:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:28.411 08:55:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:28.670 08:55:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.670 08:55:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.670 08:55:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.670 08:55:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTNiODA4NjViMGMyNWVjMmY5MjE3YmM3NTIxYjBlM2Q3MDY5MGQxODE1NmM2ZmEyZGEzM2ZhNGZjYzBiNTlmNnIJGK8=: 00:17:29.237 08:55:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.496 08:55:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:29.496 08:55:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:29.496 08:55:51 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:29.496 08:55:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:29.496 08:55:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:29.496 08:55:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.496 08:55:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:29.496 08:55:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:29.755 08:55:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:17:29.755 08:55:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.755 08:55:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:29.755 08:55:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:29.755 08:55:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:29.755 08:55:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.755 08:55:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.755 08:55:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:29.755 08:55:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.755 08:55:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:29.755 08:55:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.755 08:55:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.013 00:17:30.013 08:55:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:30.014 08:55:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:30.014 08:55:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.014 08:55:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.014 08:55:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.014 08:55:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:30.014 08:55:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.014 08:55:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:30.014 08:55:52 
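The recurring `autotest_common.sh@560 ... xtrace_disable`, `@10 ... set +x`, and `@588 ... [[ 0 == 0 ]]` triplets wrapped around every `rpc_cmd` are harness plumbing, not auth logic: command tracing is switched off while the RPC runs, and the wrapper's exit status is then tested, which `set -x` prints as the `[[ 0 == 0 ]]` lines. Roughly (an idiomatic reconstruction, not the exact SPDK source):

```bash
# Idiom behind the xtrace_disable / set +x / [[ 0 == 0 ]] noise (reconstruction).
xtrace_disable() { set +x; }

rpc_cmd() {
    xtrace_disable
    local rc=0
    /path/to/spdk/scripts/rpc.py "$@" || rc=$?   # placeholder path
    set -x
    [[ $rc == 0 ]]   # shows up in the trace as "[[ 0 == 0 ]]"
}
```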
nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:30.014 { 00:17:30.014 "cntlid": 17, 00:17:30.014 "qid": 0, 00:17:30.014 "state": "enabled", 00:17:30.014 "listen_address": { 00:17:30.014 "trtype": "RDMA", 00:17:30.014 "adrfam": "IPv4", 00:17:30.014 "traddr": "192.168.100.8", 00:17:30.014 "trsvcid": "4420" 00:17:30.014 }, 00:17:30.014 "peer_address": { 00:17:30.014 "trtype": "RDMA", 00:17:30.014 "adrfam": "IPv4", 00:17:30.014 "traddr": "192.168.100.8", 00:17:30.014 "trsvcid": "54721" 00:17:30.014 }, 00:17:30.014 "auth": { 00:17:30.014 "state": "completed", 00:17:30.014 "digest": "sha256", 00:17:30.014 "dhgroup": "ffdhe3072" 00:17:30.014 } 00:17:30.014 } 00:17:30.014 ]' 00:17:30.014 08:55:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:30.014 08:55:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:30.014 08:55:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:30.272 08:55:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:30.272 08:55:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:30.272 08:55:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.272 08:55:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.272 08:55:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.272 08:55:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTNiY2RkY2Y2NjkzYjZlNmNmNTkyYjBjZDRlYmU0ZTZlMmQ3MzNmYzk0YTE0NmZiK9xOsw==: --dhchap-ctrl-secret DHHC-1:03:ZGZlY2E1NGIxMTZhZTRmOThmNTMzODg2MWY0MDNhMTVhMTFhZjNiYzQ4NTY4ZWU2ZmFiODhlNzQ0YWRmYjNiZmCcivk=: 00:17:30.839 08:55:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.096 08:55:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:31.096 08:55:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:31.096 08:55:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.096 08:55:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:31.096 08:55:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.096 08:55:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:31.096 08:55:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:31.354 08:55:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:17:31.355 08:55:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
00:17:31.355 08:55:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:31.355 08:55:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:31.355 08:55:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:31.355 08:55:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.355 08:55:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.355 08:55:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:31.355 08:55:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.355 08:55:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:31.355 08:55:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.355 08:55:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.613 00:17:31.613 08:55:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.613 08:55:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.613 08:55:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.613 08:55:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.613 08:55:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.613 08:55:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:31.613 08:55:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.613 08:55:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:31.613 08:55:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:31.613 { 00:17:31.613 "cntlid": 19, 00:17:31.613 "qid": 0, 00:17:31.613 "state": "enabled", 00:17:31.613 "listen_address": { 00:17:31.613 "trtype": "RDMA", 00:17:31.613 "adrfam": "IPv4", 00:17:31.613 "traddr": "192.168.100.8", 00:17:31.613 "trsvcid": "4420" 00:17:31.613 }, 00:17:31.613 "peer_address": { 00:17:31.613 "trtype": "RDMA", 00:17:31.613 "adrfam": "IPv4", 00:17:31.613 "traddr": "192.168.100.8", 00:17:31.613 "trsvcid": "56839" 00:17:31.613 }, 00:17:31.613 "auth": { 00:17:31.613 "state": "completed", 00:17:31.613 "digest": "sha256", 00:17:31.613 "dhgroup": "ffdhe3072" 00:17:31.613 } 00:17:31.613 } 00:17:31.613 ]' 00:17:31.613 08:55:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:31.872 08:55:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:31.872 08:55:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # 
jq -r '.[0].auth.dhgroup' 00:17:31.872 08:55:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:31.872 08:55:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:31.872 08:55:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.872 08:55:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.872 08:55:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.131 08:55:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MTcyMjJiN2E0NGJhNzIzOWRjM2FkNjc5ODk1YzhjODK/tvc3: --dhchap-ctrl-secret DHHC-1:02:MWUyZjQwZmMzZWMzNWFmNmU1OWY1NmMwMWNhYWUyMjFiNWZiN2IyYTg1NDA3MjY49rCtNQ==: 00:17:32.699 08:55:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.699 08:55:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:32.699 08:55:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:32.699 08:55:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.699 08:55:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:32.699 08:55:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:32.699 08:55:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:32.699 08:55:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:32.958 08:55:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:17:32.958 08:55:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:32.958 08:55:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:32.958 08:55:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:32.958 08:55:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:32.958 08:55:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.958 08:55:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.958 08:55:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:32.958 08:55:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.958 08:55:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:32.958 08:55:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.958 08:55:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:33.217 00:17:33.217 08:55:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:33.217 08:55:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:33.217 08:55:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.476 08:55:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.476 08:55:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.476 08:55:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:33.476 08:55:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.476 08:55:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:33.476 08:55:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:33.476 { 00:17:33.476 "cntlid": 21, 00:17:33.476 "qid": 0, 00:17:33.476 "state": "enabled", 00:17:33.476 "listen_address": { 00:17:33.477 "trtype": "RDMA", 00:17:33.477 "adrfam": "IPv4", 00:17:33.477 "traddr": "192.168.100.8", 00:17:33.477 "trsvcid": "4420" 00:17:33.477 }, 00:17:33.477 "peer_address": { 00:17:33.477 "trtype": "RDMA", 00:17:33.477 "adrfam": "IPv4", 00:17:33.477 "traddr": "192.168.100.8", 00:17:33.477 "trsvcid": "50767" 00:17:33.477 }, 00:17:33.477 "auth": { 00:17:33.477 "state": "completed", 00:17:33.477 "digest": "sha256", 00:17:33.477 "dhgroup": "ffdhe3072" 00:17:33.477 } 00:17:33.477 } 00:17:33.477 ]' 00:17:33.477 08:55:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:33.477 08:55:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:33.477 08:55:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:33.477 08:55:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:33.477 08:55:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:33.477 08:55:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.477 08:55:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.477 08:55:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.735 08:55:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret 
DHHC-1:02:NTE5ODY0ZTVhNjVmNzY2NTZhMzc0MDQ4ZjZhZWU4MWZmYjFkMjUwMTJhNGZiZDViedavPg==: --dhchap-ctrl-secret DHHC-1:01:OGRiMjNhNGU1NWFlMGM0MjA1ZmI0ZWFhODdlNTc5MzMQ7ZlB: 00:17:34.302 08:55:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.302 08:55:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:34.302 08:55:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.302 08:55:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.302 08:55:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.302 08:55:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:34.302 08:55:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:34.302 08:55:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:34.561 08:55:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:17:34.561 08:55:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:34.561 08:55:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:34.561 08:55:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:34.561 08:55:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:34.561 08:55:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.561 08:55:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:34.561 08:55:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:34.561 08:55:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.561 08:55:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:34.561 08:55:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:34.561 08:55:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:34.820 00:17:34.820 08:55:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.820 08:55:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.820 08:55:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:35.079 08:55:57 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.079 08:55:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.079 08:55:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:35.079 08:55:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.079 08:55:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:35.079 08:55:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.079 { 00:17:35.079 "cntlid": 23, 00:17:35.079 "qid": 0, 00:17:35.079 "state": "enabled", 00:17:35.079 "listen_address": { 00:17:35.079 "trtype": "RDMA", 00:17:35.079 "adrfam": "IPv4", 00:17:35.079 "traddr": "192.168.100.8", 00:17:35.079 "trsvcid": "4420" 00:17:35.079 }, 00:17:35.079 "peer_address": { 00:17:35.079 "trtype": "RDMA", 00:17:35.079 "adrfam": "IPv4", 00:17:35.079 "traddr": "192.168.100.8", 00:17:35.079 "trsvcid": "48488" 00:17:35.079 }, 00:17:35.079 "auth": { 00:17:35.079 "state": "completed", 00:17:35.079 "digest": "sha256", 00:17:35.079 "dhgroup": "ffdhe3072" 00:17:35.079 } 00:17:35.079 } 00:17:35.079 ]' 00:17:35.079 08:55:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.079 08:55:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:35.079 08:55:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:35.079 08:55:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:35.079 08:55:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:35.079 08:55:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.079 08:55:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.079 08:55:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.338 08:55:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTNiODA4NjViMGMyNWVjMmY5MjE3YmM3NTIxYjBlM2Q3MDY5MGQxODE1NmM2ZmEyZGEzM2ZhNGZjYzBiNTlmNnIJGK8=: 00:17:35.905 08:55:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.905 08:55:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:35.905 08:55:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:35.905 08:55:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.905 08:55:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:35.905 08:55:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:35.905 08:55:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.905 08:55:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
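A cheap sanity signal in this stream is the `cntlid` in each qpairs dump: it climbs 3, 5, 7, ... 23 above and 25 just below, one fresh controller ID per authenticated SPDK attach, with the even IDs presumably consumed by the kernel `nvme connect` legs in between. Pulling it out of the same RPC output is a one-liner:

```bash
# Controller ID of the (single) qpair, from the JSON shape shown in the trace.
rpc=/path/to/spdk/scripts/rpc.py   # placeholder path
"$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].cntlid'
```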
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:35.905 08:55:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:36.164 08:55:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:17:36.164 08:55:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:36.164 08:55:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:36.164 08:55:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:36.164 08:55:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:36.164 08:55:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.164 08:55:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.164 08:55:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.164 08:55:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.164 08:55:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.164 08:55:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.164 08:55:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.423 00:17:36.423 08:55:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:36.423 08:55:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:36.423 08:55:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.681 08:55:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.681 08:55:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.681 08:55:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:36.681 08:55:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.681 08:55:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:36.681 08:55:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:36.681 { 00:17:36.681 "cntlid": 25, 00:17:36.681 "qid": 0, 00:17:36.681 "state": "enabled", 00:17:36.681 "listen_address": { 00:17:36.681 "trtype": "RDMA", 00:17:36.681 "adrfam": "IPv4", 00:17:36.681 "traddr": "192.168.100.8", 00:17:36.681 "trsvcid": "4420" 00:17:36.681 }, 00:17:36.681 "peer_address": { 00:17:36.681 "trtype": "RDMA", 00:17:36.681 "adrfam": "IPv4", 
00:17:36.681 "traddr": "192.168.100.8", 00:17:36.681 "trsvcid": "57662" 00:17:36.681 }, 00:17:36.681 "auth": { 00:17:36.681 "state": "completed", 00:17:36.681 "digest": "sha256", 00:17:36.681 "dhgroup": "ffdhe4096" 00:17:36.681 } 00:17:36.681 } 00:17:36.681 ]' 00:17:36.681 08:55:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:36.681 08:55:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:36.681 08:55:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.681 08:55:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:36.681 08:55:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.681 08:55:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.681 08:55:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.681 08:55:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.940 08:55:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTNiY2RkY2Y2NjkzYjZlNmNmNTkyYjBjZDRlYmU0ZTZlMmQ3MzNmYzk0YTE0NmZiK9xOsw==: --dhchap-ctrl-secret DHHC-1:03:ZGZlY2E1NGIxMTZhZTRmOThmNTMzODg2MWY0MDNhMTVhMTFhZjNiYzQ4NTY4ZWU2ZmFiODhlNzQ0YWRmYjNiZmCcivk=: 00:17:37.508 08:55:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.767 08:56:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:37.767 08:56:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:37.767 08:56:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.767 08:56:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:37.767 08:56:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.767 08:56:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:37.767 08:56:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:37.767 08:56:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:17:37.767 08:56:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.767 08:56:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:37.767 08:56:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:37.767 08:56:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:37.767 08:56:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.767 08:56:00 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.767 08:56:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:37.767 08:56:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.767 08:56:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:37.767 08:56:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.767 08:56:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.026 00:17:38.026 08:56:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:38.026 08:56:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:38.026 08:56:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.296 08:56:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.296 08:56:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.296 08:56:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:38.296 08:56:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.296 08:56:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:38.296 08:56:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:38.296 { 00:17:38.296 "cntlid": 27, 00:17:38.296 "qid": 0, 00:17:38.296 "state": "enabled", 00:17:38.296 "listen_address": { 00:17:38.296 "trtype": "RDMA", 00:17:38.296 "adrfam": "IPv4", 00:17:38.297 "traddr": "192.168.100.8", 00:17:38.297 "trsvcid": "4420" 00:17:38.297 }, 00:17:38.297 "peer_address": { 00:17:38.297 "trtype": "RDMA", 00:17:38.297 "adrfam": "IPv4", 00:17:38.297 "traddr": "192.168.100.8", 00:17:38.297 "trsvcid": "44875" 00:17:38.297 }, 00:17:38.297 "auth": { 00:17:38.297 "state": "completed", 00:17:38.297 "digest": "sha256", 00:17:38.297 "dhgroup": "ffdhe4096" 00:17:38.297 } 00:17:38.297 } 00:17:38.297 ]' 00:17:38.297 08:56:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:38.297 08:56:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:38.297 08:56:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.297 08:56:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:38.297 08:56:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.557 08:56:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.557 08:56:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:38.557 08:56:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.557 08:56:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MTcyMjJiN2E0NGJhNzIzOWRjM2FkNjc5ODk1YzhjODK/tvc3: --dhchap-ctrl-secret DHHC-1:02:MWUyZjQwZmMzZWMzNWFmNmU1OWY1NmMwMWNhYWUyMjFiNWZiN2IyYTg1NDA3MjY49rCtNQ==: 00:17:39.123 08:56:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.381 08:56:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:39.381 08:56:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:39.381 08:56:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.381 08:56:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:39.381 08:56:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:39.381 08:56:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:39.381 08:56:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:39.639 08:56:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:17:39.639 08:56:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.639 08:56:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:39.639 08:56:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:39.639 08:56:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:39.639 08:56:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.639 08:56:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.639 08:56:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:39.639 08:56:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.639 08:56:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:39.639 08:56:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.639 08:56:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.897 00:17:39.897 08:56:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.897 08:56:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.897 08:56:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.897 08:56:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.897 08:56:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.897 08:56:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:39.897 08:56:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.897 08:56:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:39.897 08:56:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.897 { 00:17:39.897 "cntlid": 29, 00:17:39.897 "qid": 0, 00:17:39.897 "state": "enabled", 00:17:39.897 "listen_address": { 00:17:39.897 "trtype": "RDMA", 00:17:39.897 "adrfam": "IPv4", 00:17:39.897 "traddr": "192.168.100.8", 00:17:39.897 "trsvcid": "4420" 00:17:39.897 }, 00:17:39.897 "peer_address": { 00:17:39.897 "trtype": "RDMA", 00:17:39.897 "adrfam": "IPv4", 00:17:39.897 "traddr": "192.168.100.8", 00:17:39.897 "trsvcid": "53726" 00:17:39.897 }, 00:17:39.897 "auth": { 00:17:39.897 "state": "completed", 00:17:39.897 "digest": "sha256", 00:17:39.897 "dhgroup": "ffdhe4096" 00:17:39.897 } 00:17:39.897 } 00:17:39.897 ]' 00:17:39.897 08:56:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.897 08:56:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:40.155 08:56:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:40.155 08:56:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:40.155 08:56:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:40.155 08:56:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.155 08:56:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.155 08:56:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.155 08:56:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTE5ODY0ZTVhNjVmNzY2NTZhMzc0MDQ4ZjZhZWU4MWZmYjFkMjUwMTJhNGZiZDViedavPg==: --dhchap-ctrl-secret DHHC-1:01:OGRiMjNhNGU1NWFlMGM0MjA1ZmI0ZWFhODdlNTc5MzMQ7ZlB: 00:17:40.721 08:56:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.979 08:56:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:40.979 08:56:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:40.979 08:56:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.979 08:56:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:40.979 08:56:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.979 08:56:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:40.979 08:56:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:41.237 08:56:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:17:41.237 08:56:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.237 08:56:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:41.237 08:56:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:41.237 08:56:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:41.237 08:56:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.237 08:56:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:41.237 08:56:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:41.237 08:56:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.237 08:56:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:41.237 08:56:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:41.237 08:56:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:41.495 00:17:41.495 08:56:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:41.495 08:56:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.495 08:56:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.754 08:56:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.754 08:56:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.754 08:56:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:41.754 08:56:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.754 08:56:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # 
[[ 0 == 0 ]] 00:17:41.754 08:56:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.754 { 00:17:41.754 "cntlid": 31, 00:17:41.754 "qid": 0, 00:17:41.754 "state": "enabled", 00:17:41.754 "listen_address": { 00:17:41.754 "trtype": "RDMA", 00:17:41.754 "adrfam": "IPv4", 00:17:41.754 "traddr": "192.168.100.8", 00:17:41.754 "trsvcid": "4420" 00:17:41.754 }, 00:17:41.754 "peer_address": { 00:17:41.754 "trtype": "RDMA", 00:17:41.754 "adrfam": "IPv4", 00:17:41.754 "traddr": "192.168.100.8", 00:17:41.754 "trsvcid": "40997" 00:17:41.754 }, 00:17:41.754 "auth": { 00:17:41.754 "state": "completed", 00:17:41.754 "digest": "sha256", 00:17:41.754 "dhgroup": "ffdhe4096" 00:17:41.754 } 00:17:41.754 } 00:17:41.754 ]' 00:17:41.754 08:56:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.754 08:56:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:41.754 08:56:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.754 08:56:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:41.754 08:56:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.754 08:56:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.754 08:56:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.754 08:56:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.013 08:56:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTNiODA4NjViMGMyNWVjMmY5MjE3YmM3NTIxYjBlM2Q3MDY5MGQxODE1NmM2ZmEyZGEzM2ZhNGZjYzBiNTlmNnIJGK8=: 00:17:42.580 08:56:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.580 08:56:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:42.580 08:56:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:42.580 08:56:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.580 08:56:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:42.580 08:56:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:42.580 08:56:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:42.580 08:56:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:42.581 08:56:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:42.840 08:56:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:17:42.840 08:56:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local 
digest dhgroup key ckey qpairs 00:17:42.840 08:56:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:42.840 08:56:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:42.840 08:56:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:42.840 08:56:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.840 08:56:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.840 08:56:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:42.840 08:56:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.840 08:56:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:42.840 08:56:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.840 08:56:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.099 00:17:43.099 08:56:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:43.099 08:56:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:43.099 08:56:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.358 08:56:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.358 08:56:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.358 08:56:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:43.358 08:56:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.358 08:56:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:43.358 08:56:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:43.358 { 00:17:43.358 "cntlid": 33, 00:17:43.358 "qid": 0, 00:17:43.358 "state": "enabled", 00:17:43.358 "listen_address": { 00:17:43.358 "trtype": "RDMA", 00:17:43.358 "adrfam": "IPv4", 00:17:43.358 "traddr": "192.168.100.8", 00:17:43.358 "trsvcid": "4420" 00:17:43.358 }, 00:17:43.358 "peer_address": { 00:17:43.358 "trtype": "RDMA", 00:17:43.358 "adrfam": "IPv4", 00:17:43.358 "traddr": "192.168.100.8", 00:17:43.358 "trsvcid": "39374" 00:17:43.358 }, 00:17:43.358 "auth": { 00:17:43.358 "state": "completed", 00:17:43.358 "digest": "sha256", 00:17:43.358 "dhgroup": "ffdhe6144" 00:17:43.358 } 00:17:43.358 } 00:17:43.358 ]' 00:17:43.358 08:56:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.358 08:56:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:43.358 08:56:05 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.358 08:56:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:43.358 08:56:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:43.358 08:56:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.358 08:56:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.358 08:56:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.617 08:56:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTNiY2RkY2Y2NjkzYjZlNmNmNTkyYjBjZDRlYmU0ZTZlMmQ3MzNmYzk0YTE0NmZiK9xOsw==: --dhchap-ctrl-secret DHHC-1:03:ZGZlY2E1NGIxMTZhZTRmOThmNTMzODg2MWY0MDNhMTVhMTFhZjNiYzQ4NTY4ZWU2ZmFiODhlNzQ0YWRmYjNiZmCcivk=: 00:17:44.195 08:56:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.452 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.452 08:56:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:44.452 08:56:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:44.452 08:56:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.452 08:56:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:44.452 08:56:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.452 08:56:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:44.452 08:56:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:44.452 08:56:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:17:44.452 08:56:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.452 08:56:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:44.452 08:56:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:44.452 08:56:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:44.452 08:56:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.452 08:56:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.452 08:56:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:44.452 08:56:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.452 08:56:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:17:44.452 08:56:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.452 08:56:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.710 00:17:44.968 08:56:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.968 08:56:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.968 08:56:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.968 08:56:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.968 08:56:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.968 08:56:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:44.968 08:56:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.968 08:56:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:44.968 08:56:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.968 { 00:17:44.968 "cntlid": 35, 00:17:44.968 "qid": 0, 00:17:44.968 "state": "enabled", 00:17:44.968 "listen_address": { 00:17:44.968 "trtype": "RDMA", 00:17:44.968 "adrfam": "IPv4", 00:17:44.968 "traddr": "192.168.100.8", 00:17:44.968 "trsvcid": "4420" 00:17:44.968 }, 00:17:44.968 "peer_address": { 00:17:44.968 "trtype": "RDMA", 00:17:44.968 "adrfam": "IPv4", 00:17:44.968 "traddr": "192.168.100.8", 00:17:44.968 "trsvcid": "58601" 00:17:44.968 }, 00:17:44.968 "auth": { 00:17:44.968 "state": "completed", 00:17:44.968 "digest": "sha256", 00:17:44.968 "dhgroup": "ffdhe6144" 00:17:44.968 } 00:17:44.968 } 00:17:44.968 ]' 00:17:44.968 08:56:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.968 08:56:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:44.968 08:56:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:45.227 08:56:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:45.227 08:56:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:45.227 08:56:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.227 08:56:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.227 08:56:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.227 08:56:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 
801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MTcyMjJiN2E0NGJhNzIzOWRjM2FkNjc5ODk1YzhjODK/tvc3: --dhchap-ctrl-secret DHHC-1:02:MWUyZjQwZmMzZWMzNWFmNmU1OWY1NmMwMWNhYWUyMjFiNWZiN2IyYTg1NDA3MjY49rCtNQ==: 00:17:45.823 08:56:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.081 08:56:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:46.081 08:56:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:46.081 08:56:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.081 08:56:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:46.081 08:56:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:46.081 08:56:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:46.081 08:56:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:46.340 08:56:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:17:46.340 08:56:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:46.340 08:56:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:46.340 08:56:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:46.340 08:56:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:46.340 08:56:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.340 08:56:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.340 08:56:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:46.340 08:56:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.340 08:56:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:46.340 08:56:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.340 08:56:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.598 00:17:46.598 08:56:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.598 08:56:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.598 08:56:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.857 08:56:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.857 08:56:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.857 08:56:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:46.857 08:56:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.857 08:56:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:46.857 08:56:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.857 { 00:17:46.857 "cntlid": 37, 00:17:46.857 "qid": 0, 00:17:46.857 "state": "enabled", 00:17:46.857 "listen_address": { 00:17:46.857 "trtype": "RDMA", 00:17:46.857 "adrfam": "IPv4", 00:17:46.857 "traddr": "192.168.100.8", 00:17:46.857 "trsvcid": "4420" 00:17:46.857 }, 00:17:46.857 "peer_address": { 00:17:46.857 "trtype": "RDMA", 00:17:46.857 "adrfam": "IPv4", 00:17:46.857 "traddr": "192.168.100.8", 00:17:46.857 "trsvcid": "43615" 00:17:46.857 }, 00:17:46.857 "auth": { 00:17:46.857 "state": "completed", 00:17:46.857 "digest": "sha256", 00:17:46.857 "dhgroup": "ffdhe6144" 00:17:46.857 } 00:17:46.857 } 00:17:46.857 ]' 00:17:46.857 08:56:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.857 08:56:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:46.857 08:56:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.857 08:56:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:46.857 08:56:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.857 08:56:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.857 08:56:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.857 08:56:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.115 08:56:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTE5ODY0ZTVhNjVmNzY2NTZhMzc0MDQ4ZjZhZWU4MWZmYjFkMjUwMTJhNGZiZDViedavPg==: --dhchap-ctrl-secret DHHC-1:01:OGRiMjNhNGU1NWFlMGM0MjA1ZmI0ZWFhODdlNTc5MzMQ7ZlB: 00:17:47.681 08:56:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.940 08:56:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:47.940 08:56:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:47.940 08:56:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.940 08:56:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:47.940 08:56:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 
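
For reference, the cycle this log keeps repeating distills to the short shell sketch below. Everything in it (the rpc.py path, sockets, NQNs, address, and flags) is copied from the surrounding log; key2 and ckey2 stand in for the DHHC-1 secrets loaded earlier in the run, so treat them as placeholders rather than literal values.

    rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
    # Host-side initiator (host.sock): pin the DH-HMAC-CHAP digest and DH group under test.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
    # Target side (default socket): admit the host with a specific key pair.
    $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # Connect through the SPDK host stack, then confirm the qpair finished authenticating.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q $hostnqn -n $subnqn --dhchap-key key2 --dhchap-ctrlr-key ckey2
    $rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.state'    # the test expects "completed"
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # Repeat the handshake with the kernel initiator, then tear down.
    nvme connect -t rdma -a 192.168.100.8 -n $subnqn -i 1 -q $hostnqn --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret "$key2" --dhchap-ctrl-secret "$ckey2"
    nvme disconnect -n $subnqn
    $rpc nvmf_subsystem_remove_host $subnqn $hostnqn

The surrounding loops sweep this cycle over DH groups ffdhe3072 through ffdhe8192 and keys 0-3, with the digest pinned to sha256 in this stretch of the log; key3 has no controller key configured, which is why its nvmf_subsystem_add_host calls above omit --dhchap-ctrlr-key.
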
00:17:47.940 08:56:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:47.940 08:56:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:47.940 08:56:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:17:47.940 08:56:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.940 08:56:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:47.940 08:56:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:47.940 08:56:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:47.940 08:56:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.940 08:56:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:47.940 08:56:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:47.940 08:56:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.940 08:56:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:47.940 08:56:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:47.940 08:56:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:48.198 00:17:48.456 08:56:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:48.456 08:56:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:48.456 08:56:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.456 08:56:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.456 08:56:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.456 08:56:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:48.456 08:56:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.456 08:56:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:48.456 08:56:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.456 { 00:17:48.456 "cntlid": 39, 00:17:48.456 "qid": 0, 00:17:48.456 "state": "enabled", 00:17:48.456 "listen_address": { 00:17:48.456 "trtype": "RDMA", 00:17:48.456 "adrfam": "IPv4", 00:17:48.456 "traddr": "192.168.100.8", 00:17:48.456 "trsvcid": "4420" 00:17:48.456 }, 00:17:48.456 "peer_address": { 00:17:48.456 "trtype": "RDMA", 00:17:48.456 "adrfam": 
"IPv4", 00:17:48.456 "traddr": "192.168.100.8", 00:17:48.456 "trsvcid": "38247" 00:17:48.456 }, 00:17:48.456 "auth": { 00:17:48.456 "state": "completed", 00:17:48.456 "digest": "sha256", 00:17:48.456 "dhgroup": "ffdhe6144" 00:17:48.456 } 00:17:48.456 } 00:17:48.456 ]' 00:17:48.456 08:56:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.456 08:56:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:48.456 08:56:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.714 08:56:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:48.714 08:56:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.714 08:56:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.714 08:56:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.714 08:56:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.714 08:56:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTNiODA4NjViMGMyNWVjMmY5MjE3YmM3NTIxYjBlM2Q3MDY5MGQxODE1NmM2ZmEyZGEzM2ZhNGZjYzBiNTlmNnIJGK8=: 00:17:49.280 08:56:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.538 08:56:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:49.538 08:56:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:49.538 08:56:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.538 08:56:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:49.538 08:56:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:49.538 08:56:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.538 08:56:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:49.538 08:56:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:49.797 08:56:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:17:49.797 08:56:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.797 08:56:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:49.797 08:56:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:49.797 08:56:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:49.797 08:56:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.797 08:56:12 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.797 08:56:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:49.797 08:56:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.797 08:56:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:49.797 08:56:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.797 08:56:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.055 00:17:50.313 08:56:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.313 08:56:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.313 08:56:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.313 08:56:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.313 08:56:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.313 08:56:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:50.313 08:56:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.313 08:56:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:50.313 08:56:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.313 { 00:17:50.313 "cntlid": 41, 00:17:50.313 "qid": 0, 00:17:50.313 "state": "enabled", 00:17:50.313 "listen_address": { 00:17:50.313 "trtype": "RDMA", 00:17:50.313 "adrfam": "IPv4", 00:17:50.313 "traddr": "192.168.100.8", 00:17:50.313 "trsvcid": "4420" 00:17:50.313 }, 00:17:50.313 "peer_address": { 00:17:50.313 "trtype": "RDMA", 00:17:50.313 "adrfam": "IPv4", 00:17:50.313 "traddr": "192.168.100.8", 00:17:50.313 "trsvcid": "56079" 00:17:50.313 }, 00:17:50.313 "auth": { 00:17:50.313 "state": "completed", 00:17:50.313 "digest": "sha256", 00:17:50.313 "dhgroup": "ffdhe8192" 00:17:50.313 } 00:17:50.313 } 00:17:50.313 ]' 00:17:50.313 08:56:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.313 08:56:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:50.313 08:56:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.571 08:56:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:50.571 08:56:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.571 08:56:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.571 08:56:12 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.571 08:56:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.571 08:56:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTNiY2RkY2Y2NjkzYjZlNmNmNTkyYjBjZDRlYmU0ZTZlMmQ3MzNmYzk0YTE0NmZiK9xOsw==: --dhchap-ctrl-secret DHHC-1:03:ZGZlY2E1NGIxMTZhZTRmOThmNTMzODg2MWY0MDNhMTVhMTFhZjNiYzQ4NTY4ZWU2ZmFiODhlNzQ0YWRmYjNiZmCcivk=: 00:17:51.135 08:56:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.392 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.392 08:56:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:51.392 08:56:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:51.392 08:56:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.392 08:56:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:51.392 08:56:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.392 08:56:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:51.392 08:56:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:51.649 08:56:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:17:51.649 08:56:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.649 08:56:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:51.649 08:56:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:51.649 08:56:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:51.649 08:56:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.649 08:56:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.649 08:56:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:51.649 08:56:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.649 08:56:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:51.649 08:56:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:51.649 08:56:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.213 00:17:52.213 08:56:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.213 08:56:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.213 08:56:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.213 08:56:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.213 08:56:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.213 08:56:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:52.213 08:56:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.213 08:56:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:52.213 08:56:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.213 { 00:17:52.213 "cntlid": 43, 00:17:52.213 "qid": 0, 00:17:52.213 "state": "enabled", 00:17:52.213 "listen_address": { 00:17:52.213 "trtype": "RDMA", 00:17:52.213 "adrfam": "IPv4", 00:17:52.213 "traddr": "192.168.100.8", 00:17:52.213 "trsvcid": "4420" 00:17:52.213 }, 00:17:52.213 "peer_address": { 00:17:52.213 "trtype": "RDMA", 00:17:52.213 "adrfam": "IPv4", 00:17:52.213 "traddr": "192.168.100.8", 00:17:52.213 "trsvcid": "46384" 00:17:52.213 }, 00:17:52.213 "auth": { 00:17:52.213 "state": "completed", 00:17:52.213 "digest": "sha256", 00:17:52.213 "dhgroup": "ffdhe8192" 00:17:52.213 } 00:17:52.213 } 00:17:52.213 ]' 00:17:52.213 08:56:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.214 08:56:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:52.214 08:56:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.472 08:56:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:52.472 08:56:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.472 08:56:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.472 08:56:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.472 08:56:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.472 08:56:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MTcyMjJiN2E0NGJhNzIzOWRjM2FkNjc5ODk1YzhjODK/tvc3: --dhchap-ctrl-secret DHHC-1:02:MWUyZjQwZmMzZWMzNWFmNmU1OWY1NmMwMWNhYWUyMjFiNWZiN2IyYTg1NDA3MjY49rCtNQ==: 00:17:53.039 08:56:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.298 08:56:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:53.298 08:56:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.298 08:56:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.298 08:56:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.298 08:56:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.298 08:56:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:53.298 08:56:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:53.557 08:56:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:17:53.557 08:56:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:53.557 08:56:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:53.557 08:56:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:53.557 08:56:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:53.557 08:56:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.557 08:56:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.557 08:56:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.557 08:56:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.557 08:56:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.557 08:56:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.557 08:56:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.816 00:17:54.075 08:56:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.075 08:56:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.075 08:56:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.075 08:56:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.075 08:56:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.075 08:56:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:54.075 08:56:16 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.075 08:56:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:54.075 08:56:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.075 { 00:17:54.075 "cntlid": 45, 00:17:54.075 "qid": 0, 00:17:54.075 "state": "enabled", 00:17:54.075 "listen_address": { 00:17:54.075 "trtype": "RDMA", 00:17:54.075 "adrfam": "IPv4", 00:17:54.075 "traddr": "192.168.100.8", 00:17:54.075 "trsvcid": "4420" 00:17:54.075 }, 00:17:54.075 "peer_address": { 00:17:54.075 "trtype": "RDMA", 00:17:54.075 "adrfam": "IPv4", 00:17:54.075 "traddr": "192.168.100.8", 00:17:54.075 "trsvcid": "42148" 00:17:54.075 }, 00:17:54.075 "auth": { 00:17:54.075 "state": "completed", 00:17:54.075 "digest": "sha256", 00:17:54.075 "dhgroup": "ffdhe8192" 00:17:54.075 } 00:17:54.075 } 00:17:54.075 ]' 00:17:54.075 08:56:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.075 08:56:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:54.075 08:56:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.333 08:56:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:54.333 08:56:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.333 08:56:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.333 08:56:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.333 08:56:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.333 08:56:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTE5ODY0ZTVhNjVmNzY2NTZhMzc0MDQ4ZjZhZWU4MWZmYjFkMjUwMTJhNGZiZDViedavPg==: --dhchap-ctrl-secret DHHC-1:01:OGRiMjNhNGU1NWFlMGM0MjA1ZmI0ZWFhODdlNTc5MzMQ7ZlB: 00:17:54.901 08:56:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.159 08:56:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:55.159 08:56:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.159 08:56:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.159 08:56:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:55.160 08:56:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.160 08:56:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:55.160 08:56:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:55.418 08:56:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe8192 3 00:17:55.418 08:56:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.418 08:56:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:55.418 08:56:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:55.418 08:56:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:55.419 08:56:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.419 08:56:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:17:55.419 08:56:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.419 08:56:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.419 08:56:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:55.419 08:56:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:55.419 08:56:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:55.986 00:17:55.986 08:56:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.986 08:56:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.986 08:56:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.986 08:56:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.986 08:56:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.986 08:56:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:55.986 08:56:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.986 08:56:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:55.986 08:56:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:55.986 { 00:17:55.986 "cntlid": 47, 00:17:55.986 "qid": 0, 00:17:55.986 "state": "enabled", 00:17:55.986 "listen_address": { 00:17:55.986 "trtype": "RDMA", 00:17:55.986 "adrfam": "IPv4", 00:17:55.986 "traddr": "192.168.100.8", 00:17:55.986 "trsvcid": "4420" 00:17:55.986 }, 00:17:55.986 "peer_address": { 00:17:55.986 "trtype": "RDMA", 00:17:55.986 "adrfam": "IPv4", 00:17:55.986 "traddr": "192.168.100.8", 00:17:55.986 "trsvcid": "45393" 00:17:55.986 }, 00:17:55.986 "auth": { 00:17:55.986 "state": "completed", 00:17:55.986 "digest": "sha256", 00:17:55.986 "dhgroup": "ffdhe8192" 00:17:55.986 } 00:17:55.986 } 00:17:55.986 ]' 00:17:55.986 08:56:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:55.986 08:56:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 
]] 00:17:55.986 08:56:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:55.986 08:56:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:55.986 08:56:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.245 08:56:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.245 08:56:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.245 08:56:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.245 08:56:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTNiODA4NjViMGMyNWVjMmY5MjE3YmM3NTIxYjBlM2Q3MDY5MGQxODE1NmM2ZmEyZGEzM2ZhNGZjYzBiNTlmNnIJGK8=: 00:17:56.813 08:56:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.072 08:56:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:57.072 08:56:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:57.072 08:56:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.072 08:56:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:57.072 08:56:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:57.072 08:56:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:57.072 08:56:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.072 08:56:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:57.072 08:56:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:57.072 08:56:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:17:57.072 08:56:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:57.072 08:56:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:57.072 08:56:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:57.072 08:56:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:57.072 08:56:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.072 08:56:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.072 08:56:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:57.072 08:56:19 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:57.331 08:56:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:57.331 08:56:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.331 08:56:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:57.331 00:17:57.331 08:56:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:57.331 08:56:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.331 08:56:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.589 08:56:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.589 08:56:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.589 08:56:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:57.589 08:56:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.589 08:56:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:57.589 08:56:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:57.589 { 00:17:57.589 "cntlid": 49, 00:17:57.589 "qid": 0, 00:17:57.589 "state": "enabled", 00:17:57.589 "listen_address": { 00:17:57.589 "trtype": "RDMA", 00:17:57.589 "adrfam": "IPv4", 00:17:57.589 "traddr": "192.168.100.8", 00:17:57.589 "trsvcid": "4420" 00:17:57.589 }, 00:17:57.589 "peer_address": { 00:17:57.589 "trtype": "RDMA", 00:17:57.589 "adrfam": "IPv4", 00:17:57.589 "traddr": "192.168.100.8", 00:17:57.589 "trsvcid": "45286" 00:17:57.589 }, 00:17:57.589 "auth": { 00:17:57.589 "state": "completed", 00:17:57.589 "digest": "sha384", 00:17:57.589 "dhgroup": "null" 00:17:57.589 } 00:17:57.589 } 00:17:57.589 ]' 00:17:57.589 08:56:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.589 08:56:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:57.589 08:56:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.589 08:56:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:57.589 08:56:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.848 08:56:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.848 08:56:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.848 08:56:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.848 08:56:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTNiY2RkY2Y2NjkzYjZlNmNmNTkyYjBjZDRlYmU0ZTZlMmQ3MzNmYzk0YTE0NmZiK9xOsw==: --dhchap-ctrl-secret DHHC-1:03:ZGZlY2E1NGIxMTZhZTRmOThmNTMzODg2MWY0MDNhMTVhMTFhZjNiYzQ4NTY4ZWU2ZmFiODhlNzQ0YWRmYjNiZmCcivk=: 00:17:58.414 08:56:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.673 08:56:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:17:58.673 08:56:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:58.673 08:56:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.673 08:56:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:58.673 08:56:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.673 08:56:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:58.673 08:56:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:58.932 08:56:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:17:58.932 08:56:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.932 08:56:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:58.932 08:56:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:58.932 08:56:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:58.932 08:56:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.932 08:56:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.932 08:56:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:58.932 08:56:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.932 08:56:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:58.932 08:56:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.932 08:56:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.932 00:17:59.191 08:56:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.191 08:56:21 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.191 08:56:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.191 08:56:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.191 08:56:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.191 08:56:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:59.191 08:56:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.191 08:56:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:59.191 08:56:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.191 { 00:17:59.191 "cntlid": 51, 00:17:59.191 "qid": 0, 00:17:59.191 "state": "enabled", 00:17:59.191 "listen_address": { 00:17:59.191 "trtype": "RDMA", 00:17:59.191 "adrfam": "IPv4", 00:17:59.191 "traddr": "192.168.100.8", 00:17:59.191 "trsvcid": "4420" 00:17:59.191 }, 00:17:59.191 "peer_address": { 00:17:59.191 "trtype": "RDMA", 00:17:59.191 "adrfam": "IPv4", 00:17:59.191 "traddr": "192.168.100.8", 00:17:59.191 "trsvcid": "54483" 00:17:59.191 }, 00:17:59.191 "auth": { 00:17:59.191 "state": "completed", 00:17:59.191 "digest": "sha384", 00:17:59.191 "dhgroup": "null" 00:17:59.191 } 00:17:59.191 } 00:17:59.191 ]' 00:17:59.191 08:56:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.191 08:56:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:59.191 08:56:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.449 08:56:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:59.449 08:56:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.449 08:56:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.449 08:56:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.449 08:56:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.450 08:56:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MTcyMjJiN2E0NGJhNzIzOWRjM2FkNjc5ODk1YzhjODK/tvc3: --dhchap-ctrl-secret DHHC-1:02:MWUyZjQwZmMzZWMzNWFmNmU1OWY1NmMwMWNhYWUyMjFiNWZiN2IyYTg1NDA3MjY49rCtNQ==: 00:18:00.017 08:56:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.275 08:56:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:00.275 08:56:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:00.275 08:56:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.275 08:56:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
00:18:00.275 08:56:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.275 08:56:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:00.275 08:56:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:00.533 08:56:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:00.533 08:56:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.533 08:56:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:00.533 08:56:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:00.533 08:56:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:00.533 08:56:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.533 08:56:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.533 08:56:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:00.533 08:56:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.533 08:56:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:00.533 08:56:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.534 08:56:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.791 00:18:00.791 08:56:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.791 08:56:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.791 08:56:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.791 08:56:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.791 08:56:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.791 08:56:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:00.791 08:56:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.791 08:56:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:00.791 08:56:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.791 { 00:18:00.791 "cntlid": 53, 00:18:00.791 "qid": 0, 00:18:00.791 "state": "enabled", 00:18:00.791 "listen_address": { 00:18:00.791 "trtype": "RDMA", 00:18:00.791 "adrfam": "IPv4", 00:18:00.791 
"traddr": "192.168.100.8", 00:18:00.791 "trsvcid": "4420" 00:18:00.791 }, 00:18:00.791 "peer_address": { 00:18:00.791 "trtype": "RDMA", 00:18:00.791 "adrfam": "IPv4", 00:18:00.791 "traddr": "192.168.100.8", 00:18:00.791 "trsvcid": "36295" 00:18:00.791 }, 00:18:00.791 "auth": { 00:18:00.791 "state": "completed", 00:18:00.791 "digest": "sha384", 00:18:00.791 "dhgroup": "null" 00:18:00.791 } 00:18:00.791 } 00:18:00.791 ]' 00:18:00.792 08:56:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.049 08:56:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:01.049 08:56:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.049 08:56:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:01.049 08:56:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:01.049 08:56:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.049 08:56:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.049 08:56:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.306 08:56:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTE5ODY0ZTVhNjVmNzY2NTZhMzc0MDQ4ZjZhZWU4MWZmYjFkMjUwMTJhNGZiZDViedavPg==: --dhchap-ctrl-secret DHHC-1:01:OGRiMjNhNGU1NWFlMGM0MjA1ZmI0ZWFhODdlNTc5MzMQ7ZlB: 00:18:01.871 08:56:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.871 08:56:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:01.871 08:56:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:01.871 08:56:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.871 08:56:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:01.871 08:56:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.871 08:56:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:01.871 08:56:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:02.130 08:56:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:02.130 08:56:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.130 08:56:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:02.130 08:56:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:02.130 08:56:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:02.130 08:56:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.130 08:56:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:02.130 08:56:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:02.130 08:56:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.130 08:56:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:02.130 08:56:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:02.130 08:56:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:02.389 00:18:02.389 08:56:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.389 08:56:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.389 08:56:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.389 08:56:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.389 08:56:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.389 08:56:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:02.389 08:56:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.389 08:56:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:02.389 08:56:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.389 { 00:18:02.389 "cntlid": 55, 00:18:02.390 "qid": 0, 00:18:02.390 "state": "enabled", 00:18:02.390 "listen_address": { 00:18:02.390 "trtype": "RDMA", 00:18:02.390 "adrfam": "IPv4", 00:18:02.390 "traddr": "192.168.100.8", 00:18:02.390 "trsvcid": "4420" 00:18:02.390 }, 00:18:02.390 "peer_address": { 00:18:02.390 "trtype": "RDMA", 00:18:02.390 "adrfam": "IPv4", 00:18:02.390 "traddr": "192.168.100.8", 00:18:02.390 "trsvcid": "34383" 00:18:02.390 }, 00:18:02.390 "auth": { 00:18:02.390 "state": "completed", 00:18:02.390 "digest": "sha384", 00:18:02.390 "dhgroup": "null" 00:18:02.390 } 00:18:02.390 } 00:18:02.390 ]' 00:18:02.390 08:56:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.648 08:56:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:02.648 08:56:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.648 08:56:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:02.648 08:56:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.648 08:56:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.648 08:56:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:18:02.649 08:56:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.908 08:56:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTNiODA4NjViMGMyNWVjMmY5MjE3YmM3NTIxYjBlM2Q3MDY5MGQxODE1NmM2ZmEyZGEzM2ZhNGZjYzBiNTlmNnIJGK8=: 00:18:03.476 08:56:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.476 08:56:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:03.476 08:56:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.476 08:56:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.476 08:56:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.476 08:56:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.476 08:56:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.476 08:56:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:03.476 08:56:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:03.735 08:56:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:03.735 08:56:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.735 08:56:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:03.735 08:56:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:03.735 08:56:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:03.735 08:56:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.735 08:56:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.735 08:56:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.735 08:56:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.735 08:56:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.735 08:56:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.735 08:56:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.994 00:18:03.994 08:56:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.994 08:56:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.994 08:56:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.994 08:56:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.994 08:56:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.994 08:56:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:03.994 08:56:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.994 08:56:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:03.994 08:56:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.994 { 00:18:03.994 "cntlid": 57, 00:18:03.994 "qid": 0, 00:18:03.994 "state": "enabled", 00:18:03.994 "listen_address": { 00:18:03.994 "trtype": "RDMA", 00:18:03.994 "adrfam": "IPv4", 00:18:03.994 "traddr": "192.168.100.8", 00:18:03.994 "trsvcid": "4420" 00:18:03.994 }, 00:18:03.994 "peer_address": { 00:18:03.994 "trtype": "RDMA", 00:18:03.994 "adrfam": "IPv4", 00:18:03.994 "traddr": "192.168.100.8", 00:18:03.994 "trsvcid": "39380" 00:18:03.994 }, 00:18:03.994 "auth": { 00:18:03.994 "state": "completed", 00:18:03.994 "digest": "sha384", 00:18:03.994 "dhgroup": "ffdhe2048" 00:18:03.994 } 00:18:03.994 } 00:18:03.994 ]' 00:18:03.994 08:56:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.252 08:56:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:04.252 08:56:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.252 08:56:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:04.252 08:56:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.252 08:56:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.252 08:56:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.252 08:56:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.510 08:56:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTNiY2RkY2Y2NjkzYjZlNmNmNTkyYjBjZDRlYmU0ZTZlMmQ3MzNmYzk0YTE0NmZiK9xOsw==: --dhchap-ctrl-secret DHHC-1:03:ZGZlY2E1NGIxMTZhZTRmOThmNTMzODg2MWY0MDNhMTVhMTFhZjNiYzQ4NTY4ZWU2ZmFiODhlNzQ0YWRmYjNiZmCcivk=: 00:18:05.078 08:56:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.078 08:56:27 nvmf_rdma.nvmf_auth_target 
-- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:05.078 08:56:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:05.078 08:56:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.078 08:56:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:05.078 08:56:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.078 08:56:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:05.078 08:56:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:05.336 08:56:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:05.336 08:56:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.336 08:56:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:05.336 08:56:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:05.336 08:56:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:05.336 08:56:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.336 08:56:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.336 08:56:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:05.336 08:56:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.336 08:56:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:05.337 08:56:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.337 08:56:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.595 00:18:05.595 08:56:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.595 08:56:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.595 08:56:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.595 08:56:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.595 08:56:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.595 08:56:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:05.595 
08:56:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.853 08:56:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:05.853 08:56:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.853 { 00:18:05.853 "cntlid": 59, 00:18:05.853 "qid": 0, 00:18:05.853 "state": "enabled", 00:18:05.853 "listen_address": { 00:18:05.853 "trtype": "RDMA", 00:18:05.853 "adrfam": "IPv4", 00:18:05.853 "traddr": "192.168.100.8", 00:18:05.853 "trsvcid": "4420" 00:18:05.853 }, 00:18:05.853 "peer_address": { 00:18:05.853 "trtype": "RDMA", 00:18:05.853 "adrfam": "IPv4", 00:18:05.853 "traddr": "192.168.100.8", 00:18:05.853 "trsvcid": "36249" 00:18:05.853 }, 00:18:05.853 "auth": { 00:18:05.853 "state": "completed", 00:18:05.853 "digest": "sha384", 00:18:05.853 "dhgroup": "ffdhe2048" 00:18:05.853 } 00:18:05.853 } 00:18:05.853 ]' 00:18:05.853 08:56:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.853 08:56:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:05.853 08:56:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.853 08:56:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:05.853 08:56:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.853 08:56:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.853 08:56:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.853 08:56:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.111 08:56:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MTcyMjJiN2E0NGJhNzIzOWRjM2FkNjc5ODk1YzhjODK/tvc3: --dhchap-ctrl-secret DHHC-1:02:MWUyZjQwZmMzZWMzNWFmNmU1OWY1NmMwMWNhYWUyMjFiNWZiN2IyYTg1NDA3MjY49rCtNQ==: 00:18:06.678 08:56:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.678 08:56:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:06.678 08:56:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:06.678 08:56:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.678 08:56:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:06.678 08:56:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.678 08:56:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:06.678 08:56:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:06.941 08:56:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha384 ffdhe2048 2 00:18:06.941 08:56:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.941 08:56:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:06.941 08:56:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:06.941 08:56:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:06.941 08:56:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.941 08:56:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.941 08:56:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:06.941 08:56:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.941 08:56:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:06.941 08:56:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.941 08:56:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.200 00:18:07.200 08:56:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.200 08:56:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.200 08:56:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.458 08:56:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.458 08:56:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.458 08:56:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:07.458 08:56:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.458 08:56:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:07.458 08:56:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.458 { 00:18:07.458 "cntlid": 61, 00:18:07.458 "qid": 0, 00:18:07.458 "state": "enabled", 00:18:07.458 "listen_address": { 00:18:07.458 "trtype": "RDMA", 00:18:07.458 "adrfam": "IPv4", 00:18:07.458 "traddr": "192.168.100.8", 00:18:07.458 "trsvcid": "4420" 00:18:07.458 }, 00:18:07.458 "peer_address": { 00:18:07.458 "trtype": "RDMA", 00:18:07.458 "adrfam": "IPv4", 00:18:07.458 "traddr": "192.168.100.8", 00:18:07.458 "trsvcid": "51469" 00:18:07.458 }, 00:18:07.458 "auth": { 00:18:07.458 "state": "completed", 00:18:07.458 "digest": "sha384", 00:18:07.458 "dhgroup": "ffdhe2048" 00:18:07.458 } 00:18:07.458 } 00:18:07.458 ]' 00:18:07.458 08:56:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.458 08:56:29 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:07.458 08:56:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.458 08:56:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:07.458 08:56:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.458 08:56:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.458 08:56:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.458 08:56:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.716 08:56:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTE5ODY0ZTVhNjVmNzY2NTZhMzc0MDQ4ZjZhZWU4MWZmYjFkMjUwMTJhNGZiZDViedavPg==: --dhchap-ctrl-secret DHHC-1:01:OGRiMjNhNGU1NWFlMGM0MjA1ZmI0ZWFhODdlNTc5MzMQ7ZlB: 00:18:08.283 08:56:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.283 08:56:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:08.283 08:56:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:08.283 08:56:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.283 08:56:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:08.283 08:56:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.283 08:56:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:08.283 08:56:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:08.541 08:56:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:08.541 08:56:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.541 08:56:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:08.541 08:56:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:08.541 08:56:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:08.541 08:56:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.541 08:56:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:08.541 08:56:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:08.541 08:56:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.541 08:56:30 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:08.541 08:56:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:08.541 08:56:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:08.799 00:18:08.799 08:56:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.799 08:56:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.799 08:56:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.060 08:56:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.060 08:56:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.060 08:56:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:09.060 08:56:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.060 08:56:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:09.060 08:56:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.060 { 00:18:09.060 "cntlid": 63, 00:18:09.060 "qid": 0, 00:18:09.060 "state": "enabled", 00:18:09.060 "listen_address": { 00:18:09.060 "trtype": "RDMA", 00:18:09.060 "adrfam": "IPv4", 00:18:09.060 "traddr": "192.168.100.8", 00:18:09.060 "trsvcid": "4420" 00:18:09.060 }, 00:18:09.060 "peer_address": { 00:18:09.060 "trtype": "RDMA", 00:18:09.060 "adrfam": "IPv4", 00:18:09.060 "traddr": "192.168.100.8", 00:18:09.060 "trsvcid": "35028" 00:18:09.060 }, 00:18:09.060 "auth": { 00:18:09.060 "state": "completed", 00:18:09.060 "digest": "sha384", 00:18:09.060 "dhgroup": "ffdhe2048" 00:18:09.060 } 00:18:09.060 } 00:18:09.060 ]' 00:18:09.060 08:56:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.060 08:56:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:09.060 08:56:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.060 08:56:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:09.060 08:56:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.060 08:56:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.061 08:56:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.061 08:56:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.343 08:56:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 
--dhchap-secret DHHC-1:03:YTNiODA4NjViMGMyNWVjMmY5MjE3YmM3NTIxYjBlM2Q3MDY5MGQxODE1NmM2ZmEyZGEzM2ZhNGZjYzBiNTlmNnIJGK8=: 00:18:09.924 08:56:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.924 08:56:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:09.924 08:56:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:09.924 08:56:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.924 08:56:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:09.924 08:56:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:09.924 08:56:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.924 08:56:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:09.924 08:56:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:10.183 08:56:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:18:10.183 08:56:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.183 08:56:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:10.183 08:56:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:10.183 08:56:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:10.183 08:56:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.183 08:56:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.183 08:56:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:10.183 08:56:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.183 08:56:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:10.183 08:56:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.183 08:56:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.442 00:18:10.442 08:56:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.442 08:56:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.442 08:56:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.442 08:56:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.709 08:56:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.709 08:56:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:10.709 08:56:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.709 08:56:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:10.709 08:56:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.709 { 00:18:10.709 "cntlid": 65, 00:18:10.709 "qid": 0, 00:18:10.709 "state": "enabled", 00:18:10.709 "listen_address": { 00:18:10.709 "trtype": "RDMA", 00:18:10.709 "adrfam": "IPv4", 00:18:10.709 "traddr": "192.168.100.8", 00:18:10.709 "trsvcid": "4420" 00:18:10.709 }, 00:18:10.709 "peer_address": { 00:18:10.709 "trtype": "RDMA", 00:18:10.709 "adrfam": "IPv4", 00:18:10.709 "traddr": "192.168.100.8", 00:18:10.709 "trsvcid": "53573" 00:18:10.709 }, 00:18:10.709 "auth": { 00:18:10.709 "state": "completed", 00:18:10.709 "digest": "sha384", 00:18:10.709 "dhgroup": "ffdhe3072" 00:18:10.709 } 00:18:10.709 } 00:18:10.709 ]' 00:18:10.709 08:56:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.709 08:56:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:10.709 08:56:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.709 08:56:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:10.709 08:56:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.709 08:56:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.709 08:56:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.709 08:56:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.968 08:56:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTNiY2RkY2Y2NjkzYjZlNmNmNTkyYjBjZDRlYmU0ZTZlMmQ3MzNmYzk0YTE0NmZiK9xOsw==: --dhchap-ctrl-secret DHHC-1:03:ZGZlY2E1NGIxMTZhZTRmOThmNTMzODg2MWY0MDNhMTVhMTFhZjNiYzQ4NTY4ZWU2ZmFiODhlNzQ0YWRmYjNiZmCcivk=: 00:18:11.535 08:56:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.535 08:56:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:11.535 08:56:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:11.535 08:56:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.535 08:56:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:11.535 08:56:34 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.535 08:56:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:11.535 08:56:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:11.793 08:56:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:18:11.793 08:56:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.793 08:56:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:11.793 08:56:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:11.793 08:56:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:11.793 08:56:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.793 08:56:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.793 08:56:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:11.793 08:56:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.793 08:56:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:11.793 08:56:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:11.793 08:56:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.051 00:18:12.051 08:56:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.051 08:56:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.051 08:56:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.309 08:56:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.309 08:56:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.309 08:56:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:12.309 08:56:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.309 08:56:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:12.309 08:56:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.309 { 00:18:12.309 "cntlid": 67, 00:18:12.309 "qid": 0, 00:18:12.309 "state": "enabled", 00:18:12.309 "listen_address": { 00:18:12.309 "trtype": "RDMA", 00:18:12.309 "adrfam": "IPv4", 00:18:12.309 "traddr": "192.168.100.8", 
00:18:12.309 "trsvcid": "4420" 00:18:12.309 }, 00:18:12.309 "peer_address": { 00:18:12.309 "trtype": "RDMA", 00:18:12.309 "adrfam": "IPv4", 00:18:12.309 "traddr": "192.168.100.8", 00:18:12.309 "trsvcid": "33766" 00:18:12.309 }, 00:18:12.309 "auth": { 00:18:12.309 "state": "completed", 00:18:12.309 "digest": "sha384", 00:18:12.309 "dhgroup": "ffdhe3072" 00:18:12.309 } 00:18:12.309 } 00:18:12.309 ]' 00:18:12.309 08:56:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.309 08:56:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:12.309 08:56:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.309 08:56:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:12.309 08:56:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.309 08:56:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.309 08:56:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.310 08:56:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.568 08:56:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MTcyMjJiN2E0NGJhNzIzOWRjM2FkNjc5ODk1YzhjODK/tvc3: --dhchap-ctrl-secret DHHC-1:02:MWUyZjQwZmMzZWMzNWFmNmU1OWY1NmMwMWNhYWUyMjFiNWZiN2IyYTg1NDA3MjY49rCtNQ==: 00:18:13.134 08:56:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.134 08:56:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:13.134 08:56:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:13.134 08:56:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.134 08:56:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:13.134 08:56:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.134 08:56:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:13.134 08:56:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:13.392 08:56:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:18:13.392 08:56:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.392 08:56:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:13.392 08:56:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:13.392 08:56:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:13.392 08:56:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.392 08:56:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.392 08:56:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:13.392 08:56:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.392 08:56:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:13.392 08:56:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.392 08:56:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.650 00:18:13.650 08:56:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.650 08:56:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.650 08:56:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.908 08:56:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.908 08:56:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.908 08:56:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:13.908 08:56:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.908 08:56:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:13.908 08:56:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.908 { 00:18:13.908 "cntlid": 69, 00:18:13.908 "qid": 0, 00:18:13.908 "state": "enabled", 00:18:13.908 "listen_address": { 00:18:13.908 "trtype": "RDMA", 00:18:13.908 "adrfam": "IPv4", 00:18:13.908 "traddr": "192.168.100.8", 00:18:13.908 "trsvcid": "4420" 00:18:13.908 }, 00:18:13.908 "peer_address": { 00:18:13.908 "trtype": "RDMA", 00:18:13.908 "adrfam": "IPv4", 00:18:13.908 "traddr": "192.168.100.8", 00:18:13.908 "trsvcid": "41593" 00:18:13.908 }, 00:18:13.908 "auth": { 00:18:13.908 "state": "completed", 00:18:13.908 "digest": "sha384", 00:18:13.908 "dhgroup": "ffdhe3072" 00:18:13.908 } 00:18:13.908 } 00:18:13.908 ]' 00:18:13.909 08:56:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.909 08:56:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:13.909 08:56:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.909 08:56:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:13.909 08:56:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.909 08:56:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:18:13.909 08:56:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.909 08:56:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.167 08:56:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTE5ODY0ZTVhNjVmNzY2NTZhMzc0MDQ4ZjZhZWU4MWZmYjFkMjUwMTJhNGZiZDViedavPg==: --dhchap-ctrl-secret DHHC-1:01:OGRiMjNhNGU1NWFlMGM0MjA1ZmI0ZWFhODdlNTc5MzMQ7ZlB: 00:18:14.733 08:56:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.733 08:56:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:14.733 08:56:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:14.733 08:56:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.733 08:56:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:14.733 08:56:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:14.733 08:56:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:14.733 08:56:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:15.021 08:56:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:18:15.021 08:56:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.021 08:56:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:15.021 08:56:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:15.021 08:56:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:15.021 08:56:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.021 08:56:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:15.021 08:56:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:15.021 08:56:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.021 08:56:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:15.021 08:56:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:15.021 08:56:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:15.280 00:18:15.280 08:56:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:15.280 08:56:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.280 08:56:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.280 08:56:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.280 08:56:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.280 08:56:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:15.280 08:56:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.538 08:56:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:15.538 08:56:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.538 { 00:18:15.538 "cntlid": 71, 00:18:15.538 "qid": 0, 00:18:15.538 "state": "enabled", 00:18:15.538 "listen_address": { 00:18:15.538 "trtype": "RDMA", 00:18:15.538 "adrfam": "IPv4", 00:18:15.538 "traddr": "192.168.100.8", 00:18:15.538 "trsvcid": "4420" 00:18:15.538 }, 00:18:15.538 "peer_address": { 00:18:15.538 "trtype": "RDMA", 00:18:15.538 "adrfam": "IPv4", 00:18:15.538 "traddr": "192.168.100.8", 00:18:15.538 "trsvcid": "53746" 00:18:15.538 }, 00:18:15.538 "auth": { 00:18:15.538 "state": "completed", 00:18:15.538 "digest": "sha384", 00:18:15.538 "dhgroup": "ffdhe3072" 00:18:15.538 } 00:18:15.538 } 00:18:15.538 ]' 00:18:15.538 08:56:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.538 08:56:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:15.538 08:56:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.538 08:56:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:15.538 08:56:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.538 08:56:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.538 08:56:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.538 08:56:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.797 08:56:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTNiODA4NjViMGMyNWVjMmY5MjE3YmM3NTIxYjBlM2Q3MDY5MGQxODE1NmM2ZmEyZGEzM2ZhNGZjYzBiNTlmNnIJGK8=: 00:18:16.364 08:56:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.364 08:56:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:16.364 08:56:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:16.364 08:56:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.364 08:56:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:16.364 08:56:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:16.364 08:56:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.364 08:56:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:16.364 08:56:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:16.624 08:56:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:18:16.624 08:56:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.624 08:56:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:16.624 08:56:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:16.624 08:56:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:16.624 08:56:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.624 08:56:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.624 08:56:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:16.624 08:56:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.624 08:56:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:16.624 08:56:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.624 08:56:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:16.883 00:18:16.883 08:56:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.883 08:56:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.883 08:56:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.142 08:56:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.142 08:56:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.142 08:56:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:18:17.142 08:56:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.142 08:56:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:17.142 08:56:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.142 { 00:18:17.142 "cntlid": 73, 00:18:17.142 "qid": 0, 00:18:17.142 "state": "enabled", 00:18:17.142 "listen_address": { 00:18:17.142 "trtype": "RDMA", 00:18:17.142 "adrfam": "IPv4", 00:18:17.142 "traddr": "192.168.100.8", 00:18:17.142 "trsvcid": "4420" 00:18:17.142 }, 00:18:17.142 "peer_address": { 00:18:17.142 "trtype": "RDMA", 00:18:17.142 "adrfam": "IPv4", 00:18:17.142 "traddr": "192.168.100.8", 00:18:17.142 "trsvcid": "54190" 00:18:17.142 }, 00:18:17.142 "auth": { 00:18:17.142 "state": "completed", 00:18:17.142 "digest": "sha384", 00:18:17.142 "dhgroup": "ffdhe4096" 00:18:17.142 } 00:18:17.142 } 00:18:17.142 ]' 00:18:17.142 08:56:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.142 08:56:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:17.143 08:56:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.143 08:56:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:17.143 08:56:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.143 08:56:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.143 08:56:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.143 08:56:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.402 08:56:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTNiY2RkY2Y2NjkzYjZlNmNmNTkyYjBjZDRlYmU0ZTZlMmQ3MzNmYzk0YTE0NmZiK9xOsw==: --dhchap-ctrl-secret DHHC-1:03:ZGZlY2E1NGIxMTZhZTRmOThmNTMzODg2MWY0MDNhMTVhMTFhZjNiYzQ4NTY4ZWU2ZmFiODhlNzQ0YWRmYjNiZmCcivk=: 00:18:17.970 08:56:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.970 08:56:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:17.970 08:56:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:17.970 08:56:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.970 08:56:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:17.970 08:56:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:17.970 08:56:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:17.970 08:56:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 
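The entries above close one connect_authenticate iteration (sha384/ffdhe4096 with key0), and the bdev_nvme_set_options call that follows opens the next iteration for key1. A minimal sketch of what each iteration runs, condensed from the commands logged here; $host_rpc and $target_rpc are assumed shorthands for rpc.py against /var/tmp/host.sock and against the target's default socket, and $HOSTNQN stands for the nqn.2014-08.org.nvmexpress:uuid:... value shown in the log:

  # restrict the host stack to the digest/dhgroup pair under test
  $host_rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
  # register the key pair for this host on the target subsystem
  $target_rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # attaching the controller drives the DH-HMAC-CHAP handshake over RDMA
  $host_rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 \
      -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1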
00:18:18.228 08:56:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:18:18.228 08:56:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.228 08:56:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:18.228 08:56:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:18.228 08:56:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:18.228 08:56:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.228 08:56:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.228 08:56:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:18.228 08:56:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.228 08:56:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:18.228 08:56:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.228 08:56:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:18.486 00:18:18.486 08:56:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.486 08:56:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:18.486 08:56:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.745 08:56:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.745 08:56:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.745 08:56:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:18.745 08:56:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.745 08:56:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:18.745 08:56:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.745 { 00:18:18.745 "cntlid": 75, 00:18:18.745 "qid": 0, 00:18:18.745 "state": "enabled", 00:18:18.745 "listen_address": { 00:18:18.745 "trtype": "RDMA", 00:18:18.745 "adrfam": "IPv4", 00:18:18.745 "traddr": "192.168.100.8", 00:18:18.745 "trsvcid": "4420" 00:18:18.745 }, 00:18:18.745 "peer_address": { 00:18:18.745 "trtype": "RDMA", 00:18:18.745 "adrfam": "IPv4", 00:18:18.745 "traddr": "192.168.100.8", 00:18:18.745 "trsvcid": "57020" 00:18:18.745 }, 00:18:18.745 "auth": { 00:18:18.745 "state": "completed", 00:18:18.745 "digest": "sha384", 00:18:18.745 "dhgroup": "ffdhe4096" 00:18:18.745 } 00:18:18.745 } 00:18:18.745 ]' 00:18:18.745 08:56:41 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.745 08:56:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:18.745 08:56:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.745 08:56:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:18.745 08:56:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.745 08:56:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.745 08:56:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.745 08:56:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.003 08:56:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MTcyMjJiN2E0NGJhNzIzOWRjM2FkNjc5ODk1YzhjODK/tvc3: --dhchap-ctrl-secret DHHC-1:02:MWUyZjQwZmMzZWMzNWFmNmU1OWY1NmMwMWNhYWUyMjFiNWZiN2IyYTg1NDA3MjY49rCtNQ==: 00:18:19.569 08:56:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.828 08:56:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:19.828 08:56:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:19.828 08:56:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.828 08:56:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:19.828 08:56:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.828 08:56:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:19.828 08:56:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:19.828 08:56:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:18:19.828 08:56:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.828 08:56:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:19.828 08:56:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:19.828 08:56:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:19.828 08:56:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.828 08:56:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.828 08:56:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:19.828 08:56:42 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:19.828 08:56:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:19.828 08:56:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.828 08:56:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:20.087 00:18:20.087 08:56:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.087 08:56:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.087 08:56:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.346 08:56:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.346 08:56:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.346 08:56:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:20.346 08:56:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.346 08:56:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:20.346 08:56:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.346 { 00:18:20.346 "cntlid": 77, 00:18:20.346 "qid": 0, 00:18:20.346 "state": "enabled", 00:18:20.346 "listen_address": { 00:18:20.346 "trtype": "RDMA", 00:18:20.346 "adrfam": "IPv4", 00:18:20.346 "traddr": "192.168.100.8", 00:18:20.346 "trsvcid": "4420" 00:18:20.346 }, 00:18:20.346 "peer_address": { 00:18:20.346 "trtype": "RDMA", 00:18:20.346 "adrfam": "IPv4", 00:18:20.346 "traddr": "192.168.100.8", 00:18:20.346 "trsvcid": "54447" 00:18:20.346 }, 00:18:20.346 "auth": { 00:18:20.346 "state": "completed", 00:18:20.346 "digest": "sha384", 00:18:20.346 "dhgroup": "ffdhe4096" 00:18:20.346 } 00:18:20.346 } 00:18:20.346 ]' 00:18:20.346 08:56:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.346 08:56:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:20.346 08:56:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.346 08:56:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:20.346 08:56:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.346 08:56:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.346 08:56:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.346 08:56:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.605 08:56:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTE5ODY0ZTVhNjVmNzY2NTZhMzc0MDQ4ZjZhZWU4MWZmYjFkMjUwMTJhNGZiZDViedavPg==: --dhchap-ctrl-secret DHHC-1:01:OGRiMjNhNGU1NWFlMGM0MjA1ZmI0ZWFhODdlNTc5MzMQ7ZlB: 00:18:21.173 08:56:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.432 08:56:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:21.432 08:56:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:21.432 08:56:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.432 08:56:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:21.432 08:56:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.432 08:56:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:21.432 08:56:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:21.432 08:56:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:18:21.432 08:56:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.432 08:56:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:21.432 08:56:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:21.432 08:56:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:21.432 08:56:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.432 08:56:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:21.432 08:56:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:21.432 08:56:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.691 08:56:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:21.691 08:56:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:21.691 08:56:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:21.691 00:18:21.950 08:56:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.950 08:56:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.950 08:56:44 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.950 08:56:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.950 08:56:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.950 08:56:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:21.950 08:56:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.950 08:56:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:21.950 08:56:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.950 { 00:18:21.950 "cntlid": 79, 00:18:21.950 "qid": 0, 00:18:21.950 "state": "enabled", 00:18:21.950 "listen_address": { 00:18:21.950 "trtype": "RDMA", 00:18:21.950 "adrfam": "IPv4", 00:18:21.950 "traddr": "192.168.100.8", 00:18:21.950 "trsvcid": "4420" 00:18:21.950 }, 00:18:21.950 "peer_address": { 00:18:21.950 "trtype": "RDMA", 00:18:21.950 "adrfam": "IPv4", 00:18:21.950 "traddr": "192.168.100.8", 00:18:21.950 "trsvcid": "49348" 00:18:21.950 }, 00:18:21.950 "auth": { 00:18:21.950 "state": "completed", 00:18:21.950 "digest": "sha384", 00:18:21.950 "dhgroup": "ffdhe4096" 00:18:21.950 } 00:18:21.950 } 00:18:21.950 ]' 00:18:21.950 08:56:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.950 08:56:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:21.950 08:56:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.209 08:56:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:22.209 08:56:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.209 08:56:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.209 08:56:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.209 08:56:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.209 08:56:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTNiODA4NjViMGMyNWVjMmY5MjE3YmM3NTIxYjBlM2Q3MDY5MGQxODE1NmM2ZmEyZGEzM2ZhNGZjYzBiNTlmNnIJGK8=: 00:18:22.783 08:56:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.042 08:56:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:23.042 08:56:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:23.042 08:56:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.042 08:56:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:23.042 08:56:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 
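After each attach, the target is queried for the subsystem's qpairs and the negotiated authentication parameters are asserted with jq, exactly as in the [[ ... ]] checks above. A condensed sketch of that verification step, with qpairs.json standing in (an assumed name) for the captured nvmf_subsystem_get_qpairs output:

  # the target reports per-qpair auth state once the handshake finishes
  $target_rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 > qpairs.json
  [[ $(jq -r '.[0].auth.digest'  qpairs.json) == sha384    ]]
  [[ $(jq -r '.[0].auth.dhgroup' qpairs.json) == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state'   qpairs.json) == completed ]]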
00:18:23.042 08:56:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.042 08:56:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:23.042 08:56:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:23.301 08:56:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:18:23.301 08:56:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.301 08:56:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:23.301 08:56:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:23.301 08:56:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:23.301 08:56:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.301 08:56:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.301 08:56:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:23.301 08:56:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.301 08:56:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:23.301 08:56:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.301 08:56:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.560 00:18:23.560 08:56:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.560 08:56:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.560 08:56:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.820 08:56:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.820 08:56:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.820 08:56:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:23.820 08:56:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.820 08:56:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:23.820 08:56:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:23.820 { 00:18:23.820 "cntlid": 81, 00:18:23.820 "qid": 0, 00:18:23.820 "state": "enabled", 00:18:23.820 "listen_address": { 00:18:23.820 "trtype": "RDMA", 00:18:23.820 "adrfam": 
"IPv4", 00:18:23.820 "traddr": "192.168.100.8", 00:18:23.820 "trsvcid": "4420" 00:18:23.820 }, 00:18:23.820 "peer_address": { 00:18:23.820 "trtype": "RDMA", 00:18:23.820 "adrfam": "IPv4", 00:18:23.820 "traddr": "192.168.100.8", 00:18:23.820 "trsvcid": "49625" 00:18:23.820 }, 00:18:23.820 "auth": { 00:18:23.820 "state": "completed", 00:18:23.820 "digest": "sha384", 00:18:23.820 "dhgroup": "ffdhe6144" 00:18:23.820 } 00:18:23.820 } 00:18:23.820 ]' 00:18:23.820 08:56:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:23.820 08:56:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:23.820 08:56:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.820 08:56:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:23.820 08:56:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:23.820 08:56:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.820 08:56:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.820 08:56:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.082 08:56:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTNiY2RkY2Y2NjkzYjZlNmNmNTkyYjBjZDRlYmU0ZTZlMmQ3MzNmYzk0YTE0NmZiK9xOsw==: --dhchap-ctrl-secret DHHC-1:03:ZGZlY2E1NGIxMTZhZTRmOThmNTMzODg2MWY0MDNhMTVhMTFhZjNiYzQ4NTY4ZWU2ZmFiODhlNzQ0YWRmYjNiZmCcivk=: 00:18:24.649 08:56:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.649 08:56:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:24.650 08:56:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:24.650 08:56:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.650 08:56:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:24.650 08:56:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.650 08:56:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:24.650 08:56:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:24.909 08:56:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:24.909 08:56:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.909 08:56:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:24.909 08:56:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:24.909 08:56:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- 
# key=key1 00:18:24.909 08:56:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.909 08:56:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.909 08:56:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:24.909 08:56:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.909 08:56:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:24.909 08:56:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:24.909 08:56:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:25.168 00:18:25.168 08:56:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.168 08:56:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.168 08:56:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.426 08:56:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.427 08:56:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.427 08:56:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:25.427 08:56:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.427 08:56:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:25.427 08:56:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.427 { 00:18:25.427 "cntlid": 83, 00:18:25.427 "qid": 0, 00:18:25.427 "state": "enabled", 00:18:25.427 "listen_address": { 00:18:25.427 "trtype": "RDMA", 00:18:25.427 "adrfam": "IPv4", 00:18:25.427 "traddr": "192.168.100.8", 00:18:25.427 "trsvcid": "4420" 00:18:25.427 }, 00:18:25.427 "peer_address": { 00:18:25.427 "trtype": "RDMA", 00:18:25.427 "adrfam": "IPv4", 00:18:25.427 "traddr": "192.168.100.8", 00:18:25.427 "trsvcid": "59238" 00:18:25.427 }, 00:18:25.427 "auth": { 00:18:25.427 "state": "completed", 00:18:25.427 "digest": "sha384", 00:18:25.427 "dhgroup": "ffdhe6144" 00:18:25.427 } 00:18:25.427 } 00:18:25.427 ]' 00:18:25.427 08:56:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:25.427 08:56:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:25.427 08:56:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:25.427 08:56:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:25.427 08:56:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:25.427 
08:56:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.427 08:56:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.427 08:56:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.685 08:56:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MTcyMjJiN2E0NGJhNzIzOWRjM2FkNjc5ODk1YzhjODK/tvc3: --dhchap-ctrl-secret DHHC-1:02:MWUyZjQwZmMzZWMzNWFmNmU1OWY1NmMwMWNhYWUyMjFiNWZiN2IyYTg1NDA3MjY49rCtNQ==: 00:18:26.253 08:56:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.510 08:56:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:26.510 08:56:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:26.510 08:56:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.510 08:56:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:26.510 08:56:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.510 08:56:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:26.510 08:56:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:26.510 08:56:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:26.510 08:56:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.510 08:56:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:26.510 08:56:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:26.510 08:56:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:26.510 08:56:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.510 08:56:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.510 08:56:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:26.510 08:56:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.510 08:56:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:26.510 08:56:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:26.510 08:56:49 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:27.074 00:18:27.074 08:56:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.074 08:56:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.074 08:56:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.074 08:56:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.074 08:56:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.074 08:56:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:27.074 08:56:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.074 08:56:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:27.074 08:56:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.074 { 00:18:27.074 "cntlid": 85, 00:18:27.074 "qid": 0, 00:18:27.074 "state": "enabled", 00:18:27.074 "listen_address": { 00:18:27.074 "trtype": "RDMA", 00:18:27.075 "adrfam": "IPv4", 00:18:27.075 "traddr": "192.168.100.8", 00:18:27.075 "trsvcid": "4420" 00:18:27.075 }, 00:18:27.075 "peer_address": { 00:18:27.075 "trtype": "RDMA", 00:18:27.075 "adrfam": "IPv4", 00:18:27.075 "traddr": "192.168.100.8", 00:18:27.075 "trsvcid": "43098" 00:18:27.075 }, 00:18:27.075 "auth": { 00:18:27.075 "state": "completed", 00:18:27.075 "digest": "sha384", 00:18:27.075 "dhgroup": "ffdhe6144" 00:18:27.075 } 00:18:27.075 } 00:18:27.075 ]' 00:18:27.075 08:56:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.075 08:56:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:27.075 08:56:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.333 08:56:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:27.333 08:56:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.333 08:56:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.333 08:56:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.333 08:56:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.333 08:56:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTE5ODY0ZTVhNjVmNzY2NTZhMzc0MDQ4ZjZhZWU4MWZmYjFkMjUwMTJhNGZiZDViedavPg==: --dhchap-ctrl-secret DHHC-1:01:OGRiMjNhNGU1NWFlMGM0MjA1ZmI0ZWFhODdlNTc5MzMQ7ZlB: 00:18:27.900 08:56:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.159 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.159 08:56:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:28.159 08:56:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:28.159 08:56:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.159 08:56:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:28.159 08:56:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:28.159 08:56:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:28.159 08:56:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:28.417 08:56:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:18:28.417 08:56:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.417 08:56:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:28.417 08:56:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:28.417 08:56:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:28.417 08:56:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.417 08:56:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:28.417 08:56:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:28.417 08:56:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.417 08:56:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:28.417 08:56:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:28.417 08:56:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:28.675 00:18:28.675 08:56:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.675 08:56:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.675 08:56:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.932 08:56:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.932 08:56:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.932 08:56:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:18:28.932 08:56:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.932 08:56:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:28.932 08:56:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.932 { 00:18:28.932 "cntlid": 87, 00:18:28.932 "qid": 0, 00:18:28.932 "state": "enabled", 00:18:28.932 "listen_address": { 00:18:28.932 "trtype": "RDMA", 00:18:28.932 "adrfam": "IPv4", 00:18:28.932 "traddr": "192.168.100.8", 00:18:28.932 "trsvcid": "4420" 00:18:28.932 }, 00:18:28.932 "peer_address": { 00:18:28.932 "trtype": "RDMA", 00:18:28.932 "adrfam": "IPv4", 00:18:28.932 "traddr": "192.168.100.8", 00:18:28.932 "trsvcid": "39060" 00:18:28.932 }, 00:18:28.932 "auth": { 00:18:28.932 "state": "completed", 00:18:28.932 "digest": "sha384", 00:18:28.932 "dhgroup": "ffdhe6144" 00:18:28.932 } 00:18:28.932 } 00:18:28.932 ]' 00:18:28.932 08:56:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.932 08:56:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:28.932 08:56:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.932 08:56:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:28.932 08:56:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.932 08:56:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.932 08:56:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.932 08:56:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.190 08:56:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTNiODA4NjViMGMyNWVjMmY5MjE3YmM3NTIxYjBlM2Q3MDY5MGQxODE1NmM2ZmEyZGEzM2ZhNGZjYzBiNTlmNnIJGK8=: 00:18:29.753 08:56:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.009 08:56:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:30.009 08:56:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:30.009 08:56:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.009 08:56:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:30.009 08:56:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:30.009 08:56:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:30.010 08:56:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:30.010 08:56:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 
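For orientation: each pass the trace makes through connect_authenticate repeats the same host/target RPC sequence with a different digest/dhgroup/key triple. Condensed into a standalone sketch, assuming the key0/ckey0 names registered earlier in the run (the $rpc/$SUBNQN/$HOSTNQN shorthands are ours; every flag is taken verbatim from the trace):

  # Hedged sketch of one connect_authenticate pass (here sha384/ffdhe8192).
  rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562

  # 1) Pin the host-side initiator to a single digest/dhgroup pair.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

  # 2) Authorize the host on the target (default RPC socket) with both
  #    DH-HMAC-CHAP keys, enabling bidirectional authentication.
  $rpc nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # 3) Attach a controller from the host side; this is what actually
  #    drives the authentication handshake over RDMA.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

The get_controllers/get_qpairs probes that follow each attach then confirm the pair really negotiated those parameters.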
00:18:30.010 08:56:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:18:30.010 08:56:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:30.010 08:56:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:30.010 08:56:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:30.010 08:56:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:30.010 08:56:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.010 08:56:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.010 08:56:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:30.010 08:56:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.010 08:56:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:30.010 08:56:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.010 08:56:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.579 00:18:30.579 08:56:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:30.579 08:56:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.579 08:56:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.837 08:56:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.837 08:56:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.837 08:56:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:30.837 08:56:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.837 08:56:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:30.837 08:56:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:30.837 { 00:18:30.837 "cntlid": 89, 00:18:30.837 "qid": 0, 00:18:30.837 "state": "enabled", 00:18:30.837 "listen_address": { 00:18:30.837 "trtype": "RDMA", 00:18:30.837 "adrfam": "IPv4", 00:18:30.837 "traddr": "192.168.100.8", 00:18:30.837 "trsvcid": "4420" 00:18:30.837 }, 00:18:30.837 "peer_address": { 00:18:30.837 "trtype": "RDMA", 00:18:30.837 "adrfam": "IPv4", 00:18:30.837 "traddr": "192.168.100.8", 00:18:30.837 "trsvcid": "46424" 00:18:30.837 }, 00:18:30.837 "auth": { 00:18:30.837 "state": "completed", 00:18:30.837 "digest": "sha384", 00:18:30.837 "dhgroup": "ffdhe8192" 00:18:30.837 } 00:18:30.837 } 00:18:30.837 ]' 00:18:30.837 08:56:53 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.837 08:56:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:30.837 08:56:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.837 08:56:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:30.837 08:56:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.837 08:56:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.837 08:56:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.837 08:56:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.095 08:56:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTNiY2RkY2Y2NjkzYjZlNmNmNTkyYjBjZDRlYmU0ZTZlMmQ3MzNmYzk0YTE0NmZiK9xOsw==: --dhchap-ctrl-secret DHHC-1:03:ZGZlY2E1NGIxMTZhZTRmOThmNTMzODg2MWY0MDNhMTVhMTFhZjNiYzQ4NTY4ZWU2ZmFiODhlNzQ0YWRmYjNiZmCcivk=: 00:18:31.661 08:56:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.919 08:56:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:31.919 08:56:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:31.919 08:56:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.919 08:56:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:31.919 08:56:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.919 08:56:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:31.919 08:56:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:31.919 08:56:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:18:31.919 08:56:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.919 08:56:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:31.919 08:56:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:31.919 08:56:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:31.919 08:56:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.919 08:56:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.919 08:56:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:31.919 
08:56:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.919 08:56:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:31.919 08:56:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.919 08:56:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.484 00:18:32.484 08:56:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.484 08:56:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.484 08:56:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.741 08:56:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.742 08:56:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.742 08:56:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:32.742 08:56:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.742 08:56:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:32.742 08:56:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.742 { 00:18:32.742 "cntlid": 91, 00:18:32.742 "qid": 0, 00:18:32.742 "state": "enabled", 00:18:32.742 "listen_address": { 00:18:32.742 "trtype": "RDMA", 00:18:32.742 "adrfam": "IPv4", 00:18:32.742 "traddr": "192.168.100.8", 00:18:32.742 "trsvcid": "4420" 00:18:32.742 }, 00:18:32.742 "peer_address": { 00:18:32.742 "trtype": "RDMA", 00:18:32.742 "adrfam": "IPv4", 00:18:32.742 "traddr": "192.168.100.8", 00:18:32.742 "trsvcid": "44918" 00:18:32.742 }, 00:18:32.742 "auth": { 00:18:32.742 "state": "completed", 00:18:32.742 "digest": "sha384", 00:18:32.742 "dhgroup": "ffdhe8192" 00:18:32.742 } 00:18:32.742 } 00:18:32.742 ]' 00:18:32.742 08:56:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.742 08:56:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:32.742 08:56:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.742 08:56:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:32.742 08:56:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.742 08:56:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.742 08:56:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.742 08:56:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.999 08:56:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MTcyMjJiN2E0NGJhNzIzOWRjM2FkNjc5ODk1YzhjODK/tvc3: --dhchap-ctrl-secret DHHC-1:02:MWUyZjQwZmMzZWMzNWFmNmU1OWY1NmMwMWNhYWUyMjFiNWZiN2IyYTg1NDA3MjY49rCtNQ==: 00:18:33.652 08:56:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.652 08:56:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:33.652 08:56:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:33.652 08:56:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.652 08:56:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:33.652 08:56:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.652 08:56:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:33.652 08:56:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:33.909 08:56:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:18:33.909 08:56:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.909 08:56:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:33.909 08:56:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:33.909 08:56:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:33.909 08:56:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.909 08:56:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.909 08:56:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:33.909 08:56:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.909 08:56:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:33.909 08:56:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.909 08:56:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.473 00:18:34.473 08:56:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.473 08:56:56 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.473 08:56:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.473 08:56:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.473 08:56:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.473 08:56:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:34.473 08:56:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.473 08:56:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:34.473 08:56:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.473 { 00:18:34.473 "cntlid": 93, 00:18:34.473 "qid": 0, 00:18:34.473 "state": "enabled", 00:18:34.473 "listen_address": { 00:18:34.473 "trtype": "RDMA", 00:18:34.473 "adrfam": "IPv4", 00:18:34.473 "traddr": "192.168.100.8", 00:18:34.473 "trsvcid": "4420" 00:18:34.473 }, 00:18:34.473 "peer_address": { 00:18:34.473 "trtype": "RDMA", 00:18:34.473 "adrfam": "IPv4", 00:18:34.473 "traddr": "192.168.100.8", 00:18:34.473 "trsvcid": "57254" 00:18:34.473 }, 00:18:34.473 "auth": { 00:18:34.473 "state": "completed", 00:18:34.473 "digest": "sha384", 00:18:34.473 "dhgroup": "ffdhe8192" 00:18:34.473 } 00:18:34.473 } 00:18:34.473 ]' 00:18:34.473 08:56:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.473 08:56:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:34.473 08:56:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.473 08:56:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:34.473 08:56:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.730 08:56:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.730 08:56:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.730 08:56:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.730 08:56:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTE5ODY0ZTVhNjVmNzY2NTZhMzc0MDQ4ZjZhZWU4MWZmYjFkMjUwMTJhNGZiZDViedavPg==: --dhchap-ctrl-secret DHHC-1:01:OGRiMjNhNGU1NWFlMGM0MjA1ZmI0ZWFhODdlNTc5MzMQ7ZlB: 00:18:35.293 08:56:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.551 08:56:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:35.551 08:56:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:35.551 08:56:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.551 08:56:57 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:35.551 08:56:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.551 08:56:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:35.551 08:56:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:35.551 08:56:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:18:35.551 08:56:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.551 08:56:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:35.551 08:56:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:35.551 08:56:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:35.551 08:56:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.551 08:56:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:35.551 08:56:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:35.551 08:56:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.810 08:56:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:35.810 08:56:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:35.810 08:56:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:36.069 00:18:36.069 08:56:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.069 08:56:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.069 08:56:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.327 08:56:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.327 08:56:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.327 08:56:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:36.327 08:56:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.327 08:56:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:36.327 08:56:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.327 { 00:18:36.327 "cntlid": 95, 00:18:36.327 "qid": 0, 00:18:36.327 "state": "enabled", 00:18:36.327 "listen_address": { 00:18:36.327 "trtype": "RDMA", 00:18:36.327 "adrfam": "IPv4", 00:18:36.327 "traddr": 
"192.168.100.8", 00:18:36.327 "trsvcid": "4420" 00:18:36.327 }, 00:18:36.327 "peer_address": { 00:18:36.327 "trtype": "RDMA", 00:18:36.327 "adrfam": "IPv4", 00:18:36.327 "traddr": "192.168.100.8", 00:18:36.327 "trsvcid": "44366" 00:18:36.327 }, 00:18:36.327 "auth": { 00:18:36.327 "state": "completed", 00:18:36.327 "digest": "sha384", 00:18:36.327 "dhgroup": "ffdhe8192" 00:18:36.327 } 00:18:36.327 } 00:18:36.327 ]' 00:18:36.327 08:56:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.327 08:56:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:36.327 08:56:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.327 08:56:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:36.327 08:56:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.585 08:56:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.585 08:56:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.585 08:56:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.585 08:56:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTNiODA4NjViMGMyNWVjMmY5MjE3YmM3NTIxYjBlM2Q3MDY5MGQxODE1NmM2ZmEyZGEzM2ZhNGZjYzBiNTlmNnIJGK8=: 00:18:37.151 08:56:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.410 08:56:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:37.410 08:56:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:37.410 08:56:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.410 08:56:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:37.410 08:56:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:37.410 08:56:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:37.410 08:56:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.410 08:56:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:37.410 08:56:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:37.410 08:56:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:18:37.410 08:56:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:37.410 08:56:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:37.410 08:56:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:37.410 
08:56:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:37.410 08:56:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.410 08:56:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.410 08:56:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:37.411 08:56:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.411 08:56:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:37.411 08:56:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.411 08:56:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.669 00:18:37.669 08:57:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.669 08:57:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.669 08:57:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.927 08:57:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.927 08:57:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.927 08:57:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:37.927 08:57:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.927 08:57:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:37.927 08:57:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.927 { 00:18:37.927 "cntlid": 97, 00:18:37.927 "qid": 0, 00:18:37.927 "state": "enabled", 00:18:37.927 "listen_address": { 00:18:37.927 "trtype": "RDMA", 00:18:37.927 "adrfam": "IPv4", 00:18:37.927 "traddr": "192.168.100.8", 00:18:37.927 "trsvcid": "4420" 00:18:37.927 }, 00:18:37.927 "peer_address": { 00:18:37.927 "trtype": "RDMA", 00:18:37.927 "adrfam": "IPv4", 00:18:37.927 "traddr": "192.168.100.8", 00:18:37.927 "trsvcid": "36117" 00:18:37.927 }, 00:18:37.927 "auth": { 00:18:37.927 "state": "completed", 00:18:37.927 "digest": "sha512", 00:18:37.927 "dhgroup": "null" 00:18:37.927 } 00:18:37.927 } 00:18:37.927 ]' 00:18:37.927 08:57:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.927 08:57:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:37.927 08:57:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.927 08:57:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:37.927 08:57:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # 
jq -r '.[0].auth.state' 00:18:38.184 08:57:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.184 08:57:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.184 08:57:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.184 08:57:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTNiY2RkY2Y2NjkzYjZlNmNmNTkyYjBjZDRlYmU0ZTZlMmQ3MzNmYzk0YTE0NmZiK9xOsw==: --dhchap-ctrl-secret DHHC-1:03:ZGZlY2E1NGIxMTZhZTRmOThmNTMzODg2MWY0MDNhMTVhMTFhZjNiYzQ4NTY4ZWU2ZmFiODhlNzQ0YWRmYjNiZmCcivk=: 00:18:38.749 08:57:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.007 08:57:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:39.007 08:57:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:39.007 08:57:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.007 08:57:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:39.007 08:57:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:39.007 08:57:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:39.007 08:57:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:39.007 08:57:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:18:39.007 08:57:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:39.007 08:57:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:39.007 08:57:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:39.007 08:57:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:39.007 08:57:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.007 08:57:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.007 08:57:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:39.007 08:57:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.266 08:57:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:39.266 08:57:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:18:39.266 08:57:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.266 00:18:39.266 08:57:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.266 08:57:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.266 08:57:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.525 08:57:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.525 08:57:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.525 08:57:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:39.525 08:57:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.525 08:57:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:39.525 08:57:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.525 { 00:18:39.525 "cntlid": 99, 00:18:39.525 "qid": 0, 00:18:39.525 "state": "enabled", 00:18:39.525 "listen_address": { 00:18:39.525 "trtype": "RDMA", 00:18:39.525 "adrfam": "IPv4", 00:18:39.525 "traddr": "192.168.100.8", 00:18:39.525 "trsvcid": "4420" 00:18:39.525 }, 00:18:39.525 "peer_address": { 00:18:39.525 "trtype": "RDMA", 00:18:39.525 "adrfam": "IPv4", 00:18:39.525 "traddr": "192.168.100.8", 00:18:39.525 "trsvcid": "41927" 00:18:39.525 }, 00:18:39.525 "auth": { 00:18:39.525 "state": "completed", 00:18:39.525 "digest": "sha512", 00:18:39.525 "dhgroup": "null" 00:18:39.525 } 00:18:39.525 } 00:18:39.525 ]' 00:18:39.525 08:57:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.525 08:57:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:39.525 08:57:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.525 08:57:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:39.525 08:57:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.783 08:57:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.783 08:57:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.783 08:57:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.783 08:57:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MTcyMjJiN2E0NGJhNzIzOWRjM2FkNjc5ODk1YzhjODK/tvc3: --dhchap-ctrl-secret DHHC-1:02:MWUyZjQwZmMzZWMzNWFmNmU1OWY1NmMwMWNhYWUyMjFiNWZiN2IyYTg1NDA3MjY49rCtNQ==: 00:18:40.351 08:57:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
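The actual pass/fail signal in each pass is the trio of jq probes against nvmf_subsystem_get_qpairs; pulled out of the trace, the check amounts to the sketch below (the $rpc shorthand is ours; expected values shown for the sha512/null pass above):

  # Hedged sketch: assert what the active qpair actually negotiated.
  rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
  qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]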
00:18:40.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.610 08:57:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:40.610 08:57:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:40.610 08:57:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.610 08:57:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:40.610 08:57:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.610 08:57:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:40.610 08:57:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:40.610 08:57:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:18:40.610 08:57:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.610 08:57:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:40.610 08:57:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:40.610 08:57:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:40.610 08:57:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.610 08:57:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.610 08:57:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:40.610 08:57:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.610 08:57:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:40.610 08:57:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.610 08:57:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.868 00:18:40.868 08:57:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:40.868 08:57:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.868 08:57:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.126 08:57:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.126 08:57:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.126 
08:57:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:41.126 08:57:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.126 08:57:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:41.126 08:57:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:41.126 { 00:18:41.126 "cntlid": 101, 00:18:41.126 "qid": 0, 00:18:41.126 "state": "enabled", 00:18:41.126 "listen_address": { 00:18:41.126 "trtype": "RDMA", 00:18:41.126 "adrfam": "IPv4", 00:18:41.126 "traddr": "192.168.100.8", 00:18:41.126 "trsvcid": "4420" 00:18:41.126 }, 00:18:41.126 "peer_address": { 00:18:41.126 "trtype": "RDMA", 00:18:41.126 "adrfam": "IPv4", 00:18:41.126 "traddr": "192.168.100.8", 00:18:41.126 "trsvcid": "42952" 00:18:41.126 }, 00:18:41.126 "auth": { 00:18:41.126 "state": "completed", 00:18:41.126 "digest": "sha512", 00:18:41.126 "dhgroup": "null" 00:18:41.126 } 00:18:41.126 } 00:18:41.126 ]' 00:18:41.126 08:57:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:41.126 08:57:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:41.126 08:57:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.126 08:57:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:41.126 08:57:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.385 08:57:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.385 08:57:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.385 08:57:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.385 08:57:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTE5ODY0ZTVhNjVmNzY2NTZhMzc0MDQ4ZjZhZWU4MWZmYjFkMjUwMTJhNGZiZDViedavPg==: --dhchap-ctrl-secret DHHC-1:01:OGRiMjNhNGU1NWFlMGM0MjA1ZmI0ZWFhODdlNTc5MzMQ7ZlB: 00:18:41.952 08:57:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.211 08:57:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:42.211 08:57:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:42.211 08:57:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.211 08:57:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:42.211 08:57:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.211 08:57:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:42.211 08:57:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 
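The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line that recurs above relies on bash's ${var:+words} expansion, which yields the words only when the variable is set and non-empty; that is why the key3 pass that follows adds the host with --dhchap-key key3 alone. A minimal illustration of the idiom (array contents are ours):

  # ${var:+words} expands to nothing for an unset or empty element,
  # so ckey becomes either two extra arguments or an empty array.
  ckeys=("c0secret" "c1secret" "c2secret" "")   # no ctrl secret for key3
  keyid=3
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo "${#ckey[@]}"   # prints 0; with keyid=0 it would print 2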
00:18:42.211 08:57:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:18:42.211 08:57:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.211 08:57:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:42.211 08:57:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:42.211 08:57:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:42.211 08:57:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.211 08:57:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:42.211 08:57:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:42.211 08:57:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.470 08:57:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:42.470 08:57:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.470 08:57:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.470 00:18:42.470 08:57:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.470 08:57:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.470 08:57:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.729 08:57:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.729 08:57:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.729 08:57:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:42.729 08:57:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.729 08:57:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:42.729 08:57:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.729 { 00:18:42.729 "cntlid": 103, 00:18:42.729 "qid": 0, 00:18:42.729 "state": "enabled", 00:18:42.729 "listen_address": { 00:18:42.729 "trtype": "RDMA", 00:18:42.729 "adrfam": "IPv4", 00:18:42.729 "traddr": "192.168.100.8", 00:18:42.729 "trsvcid": "4420" 00:18:42.729 }, 00:18:42.729 "peer_address": { 00:18:42.729 "trtype": "RDMA", 00:18:42.729 "adrfam": "IPv4", 00:18:42.729 "traddr": "192.168.100.8", 00:18:42.729 "trsvcid": "36476" 00:18:42.729 }, 00:18:42.729 "auth": { 00:18:42.729 "state": "completed", 00:18:42.729 "digest": "sha512", 00:18:42.729 "dhgroup": "null" 00:18:42.729 } 00:18:42.729 } 00:18:42.729 ]' 00:18:42.729 08:57:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.729 08:57:05 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:42.729 08:57:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.729 08:57:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:42.729 08:57:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.987 08:57:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.987 08:57:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.987 08:57:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.987 08:57:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTNiODA4NjViMGMyNWVjMmY5MjE3YmM3NTIxYjBlM2Q3MDY5MGQxODE1NmM2ZmEyZGEzM2ZhNGZjYzBiNTlmNnIJGK8=: 00:18:43.555 08:57:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.814 08:57:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:43.814 08:57:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:43.814 08:57:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.814 08:57:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:43.814 08:57:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:43.814 08:57:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.814 08:57:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:43.814 08:57:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:43.814 08:57:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:18:43.814 08:57:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.814 08:57:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:43.814 08:57:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:43.814 08:57:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:43.814 08:57:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.814 08:57:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.814 08:57:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:43.814 08:57:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:43.814 08:57:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:43.814 08:57:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.814 08:57:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.072 00:18:44.073 08:57:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:44.073 08:57:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:44.073 08:57:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.331 08:57:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.331 08:57:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.331 08:57:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:44.331 08:57:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.331 08:57:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:44.332 08:57:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:44.332 { 00:18:44.332 "cntlid": 105, 00:18:44.332 "qid": 0, 00:18:44.332 "state": "enabled", 00:18:44.332 "listen_address": { 00:18:44.332 "trtype": "RDMA", 00:18:44.332 "adrfam": "IPv4", 00:18:44.332 "traddr": "192.168.100.8", 00:18:44.332 "trsvcid": "4420" 00:18:44.332 }, 00:18:44.332 "peer_address": { 00:18:44.332 "trtype": "RDMA", 00:18:44.332 "adrfam": "IPv4", 00:18:44.332 "traddr": "192.168.100.8", 00:18:44.332 "trsvcid": "42817" 00:18:44.332 }, 00:18:44.332 "auth": { 00:18:44.332 "state": "completed", 00:18:44.332 "digest": "sha512", 00:18:44.332 "dhgroup": "ffdhe2048" 00:18:44.332 } 00:18:44.332 } 00:18:44.332 ]' 00:18:44.332 08:57:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:44.332 08:57:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:44.332 08:57:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:44.591 08:57:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:44.591 08:57:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:44.591 08:57:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.591 08:57:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.591 08:57:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.591 08:57:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTNiY2RkY2Y2NjkzYjZlNmNmNTkyYjBjZDRlYmU0ZTZlMmQ3MzNmYzk0YTE0NmZiK9xOsw==: --dhchap-ctrl-secret DHHC-1:03:ZGZlY2E1NGIxMTZhZTRmOThmNTMzODg2MWY0MDNhMTVhMTFhZjNiYzQ4NTY4ZWU2ZmFiODhlNzQ0YWRmYjNiZmCcivk=: 00:18:45.158 08:57:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.417 08:57:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:45.417 08:57:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:45.417 08:57:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.417 08:57:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:45.417 08:57:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.417 08:57:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:45.417 08:57:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:45.676 08:57:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:18:45.676 08:57:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.676 08:57:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:45.676 08:57:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:45.676 08:57:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:45.676 08:57:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.676 08:57:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.676 08:57:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:45.676 08:57:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.676 08:57:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:45.676 08:57:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.676 08:57:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.935 00:18:45.935 08:57:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.935 08:57:08 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.935 08:57:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.935 08:57:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.935 08:57:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.935 08:57:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:45.935 08:57:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.935 08:57:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:45.935 08:57:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.935 { 00:18:45.935 "cntlid": 107, 00:18:45.935 "qid": 0, 00:18:45.935 "state": "enabled", 00:18:45.935 "listen_address": { 00:18:45.935 "trtype": "RDMA", 00:18:45.935 "adrfam": "IPv4", 00:18:45.935 "traddr": "192.168.100.8", 00:18:45.935 "trsvcid": "4420" 00:18:45.935 }, 00:18:45.935 "peer_address": { 00:18:45.935 "trtype": "RDMA", 00:18:45.935 "adrfam": "IPv4", 00:18:45.935 "traddr": "192.168.100.8", 00:18:45.935 "trsvcid": "45153" 00:18:45.935 }, 00:18:45.935 "auth": { 00:18:45.935 "state": "completed", 00:18:45.935 "digest": "sha512", 00:18:45.935 "dhgroup": "ffdhe2048" 00:18:45.935 } 00:18:45.935 } 00:18:45.935 ]' 00:18:45.935 08:57:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:46.194 08:57:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:46.194 08:57:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:46.194 08:57:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:46.194 08:57:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:46.194 08:57:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.194 08:57:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.194 08:57:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.453 08:57:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MTcyMjJiN2E0NGJhNzIzOWRjM2FkNjc5ODk1YzhjODK/tvc3: --dhchap-ctrl-secret DHHC-1:02:MWUyZjQwZmMzZWMzNWFmNmU1OWY1NmMwMWNhYWUyMjFiNWZiN2IyYTg1NDA3MjY49rCtNQ==: 00:18:47.020 08:57:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.021 08:57:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:47.021 08:57:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:47.021 08:57:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.021 08:57:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 
0 == 0 ]] 00:18:47.021 08:57:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:47.021 08:57:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:47.021 08:57:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:47.279 08:57:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:18:47.279 08:57:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:47.279 08:57:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:47.279 08:57:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:47.279 08:57:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:47.279 08:57:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.279 08:57:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.279 08:57:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:47.279 08:57:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.279 08:57:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:47.279 08:57:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.279 08:57:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.538 00:18:47.538 08:57:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.538 08:57:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.538 08:57:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.538 08:57:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.538 08:57:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.538 08:57:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:47.538 08:57:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.538 08:57:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:47.538 08:57:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.538 { 00:18:47.538 "cntlid": 109, 00:18:47.538 "qid": 0, 00:18:47.538 "state": "enabled", 00:18:47.538 "listen_address": { 00:18:47.538 "trtype": "RDMA", 00:18:47.538 
"adrfam": "IPv4", 00:18:47.538 "traddr": "192.168.100.8", 00:18:47.538 "trsvcid": "4420" 00:18:47.538 }, 00:18:47.538 "peer_address": { 00:18:47.538 "trtype": "RDMA", 00:18:47.538 "adrfam": "IPv4", 00:18:47.538 "traddr": "192.168.100.8", 00:18:47.538 "trsvcid": "53676" 00:18:47.538 }, 00:18:47.538 "auth": { 00:18:47.538 "state": "completed", 00:18:47.538 "digest": "sha512", 00:18:47.538 "dhgroup": "ffdhe2048" 00:18:47.538 } 00:18:47.538 } 00:18:47.538 ]' 00:18:47.539 08:57:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.797 08:57:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:47.797 08:57:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.797 08:57:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:47.797 08:57:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.797 08:57:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.797 08:57:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.797 08:57:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.055 08:57:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTE5ODY0ZTVhNjVmNzY2NTZhMzc0MDQ4ZjZhZWU4MWZmYjFkMjUwMTJhNGZiZDViedavPg==: --dhchap-ctrl-secret DHHC-1:01:OGRiMjNhNGU1NWFlMGM0MjA1ZmI0ZWFhODdlNTc5MzMQ7ZlB: 00:18:48.623 08:57:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.623 08:57:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:48.623 08:57:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:48.623 08:57:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.623 08:57:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:48.623 08:57:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.623 08:57:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:48.623 08:57:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:48.881 08:57:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:18:48.881 08:57:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.881 08:57:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:48.881 08:57:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:48.881 08:57:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:48.881 08:57:11 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.881 08:57:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:48.881 08:57:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:48.881 08:57:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.881 08:57:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:48.881 08:57:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:48.881 08:57:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:49.140 00:18:49.140 08:57:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.140 08:57:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.140 08:57:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.140 08:57:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.140 08:57:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.140 08:57:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:49.140 08:57:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.140 08:57:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:49.140 08:57:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.140 { 00:18:49.140 "cntlid": 111, 00:18:49.140 "qid": 0, 00:18:49.140 "state": "enabled", 00:18:49.140 "listen_address": { 00:18:49.140 "trtype": "RDMA", 00:18:49.140 "adrfam": "IPv4", 00:18:49.140 "traddr": "192.168.100.8", 00:18:49.140 "trsvcid": "4420" 00:18:49.140 }, 00:18:49.140 "peer_address": { 00:18:49.140 "trtype": "RDMA", 00:18:49.140 "adrfam": "IPv4", 00:18:49.140 "traddr": "192.168.100.8", 00:18:49.140 "trsvcid": "46293" 00:18:49.140 }, 00:18:49.140 "auth": { 00:18:49.140 "state": "completed", 00:18:49.140 "digest": "sha512", 00:18:49.140 "dhgroup": "ffdhe2048" 00:18:49.140 } 00:18:49.140 } 00:18:49.140 ]' 00:18:49.140 08:57:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.140 08:57:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:49.140 08:57:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.398 08:57:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:49.398 08:57:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.398 08:57:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
00:18:49.398 08:57:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.398 08:57:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.398 08:57:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTNiODA4NjViMGMyNWVjMmY5MjE3YmM3NTIxYjBlM2Q3MDY5MGQxODE1NmM2ZmEyZGEzM2ZhNGZjYzBiNTlmNnIJGK8=: 00:18:49.964 08:57:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.223 08:57:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:50.223 08:57:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:50.223 08:57:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.223 08:57:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:50.223 08:57:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:50.223 08:57:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.223 08:57:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:50.223 08:57:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:50.481 08:57:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:18:50.481 08:57:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.481 08:57:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:50.481 08:57:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:50.481 08:57:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:50.481 08:57:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.481 08:57:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.481 08:57:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:50.481 08:57:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.481 08:57:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:50.481 08:57:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.481 08:57:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.739 00:18:50.739 08:57:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:50.739 08:57:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:50.739 08:57:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.739 08:57:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.739 08:57:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.739 08:57:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:50.739 08:57:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.739 08:57:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:50.739 08:57:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:50.739 { 00:18:50.739 "cntlid": 113, 00:18:50.739 "qid": 0, 00:18:50.739 "state": "enabled", 00:18:50.739 "listen_address": { 00:18:50.739 "trtype": "RDMA", 00:18:50.739 "adrfam": "IPv4", 00:18:50.739 "traddr": "192.168.100.8", 00:18:50.739 "trsvcid": "4420" 00:18:50.739 }, 00:18:50.739 "peer_address": { 00:18:50.739 "trtype": "RDMA", 00:18:50.739 "adrfam": "IPv4", 00:18:50.739 "traddr": "192.168.100.8", 00:18:50.739 "trsvcid": "35342" 00:18:50.739 }, 00:18:50.739 "auth": { 00:18:50.739 "state": "completed", 00:18:50.739 "digest": "sha512", 00:18:50.739 "dhgroup": "ffdhe3072" 00:18:50.739 } 00:18:50.739 } 00:18:50.739 ]' 00:18:50.739 08:57:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:50.998 08:57:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:50.998 08:57:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:50.998 08:57:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:50.998 08:57:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:50.998 08:57:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.998 08:57:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.998 08:57:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.256 08:57:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTNiY2RkY2Y2NjkzYjZlNmNmNTkyYjBjZDRlYmU0ZTZlMmQ3MzNmYzk0YTE0NmZiK9xOsw==: --dhchap-ctrl-secret DHHC-1:03:ZGZlY2E1NGIxMTZhZTRmOThmNTMzODg2MWY0MDNhMTVhMTFhZjNiYzQ4NTY4ZWU2ZmFiODhlNzQ0YWRmYjNiZmCcivk=: 00:18:51.823 08:57:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.823 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.823 08:57:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:51.824 08:57:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:51.824 08:57:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.824 08:57:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:51.824 08:57:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.824 08:57:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:51.824 08:57:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:52.082 08:57:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:18:52.082 08:57:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.082 08:57:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:52.082 08:57:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:52.082 08:57:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:52.082 08:57:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.082 08:57:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.082 08:57:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:52.082 08:57:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.082 08:57:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:52.082 08:57:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.082 08:57:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.340 00:18:52.340 08:57:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.340 08:57:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.340 08:57:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.599 08:57:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.599 08:57:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:18:52.599 08:57:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:52.599 08:57:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.599 08:57:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:52.599 08:57:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.599 { 00:18:52.599 "cntlid": 115, 00:18:52.599 "qid": 0, 00:18:52.599 "state": "enabled", 00:18:52.599 "listen_address": { 00:18:52.599 "trtype": "RDMA", 00:18:52.599 "adrfam": "IPv4", 00:18:52.599 "traddr": "192.168.100.8", 00:18:52.599 "trsvcid": "4420" 00:18:52.599 }, 00:18:52.599 "peer_address": { 00:18:52.599 "trtype": "RDMA", 00:18:52.599 "adrfam": "IPv4", 00:18:52.599 "traddr": "192.168.100.8", 00:18:52.599 "trsvcid": "38509" 00:18:52.599 }, 00:18:52.599 "auth": { 00:18:52.599 "state": "completed", 00:18:52.599 "digest": "sha512", 00:18:52.599 "dhgroup": "ffdhe3072" 00:18:52.599 } 00:18:52.599 } 00:18:52.599 ]' 00:18:52.599 08:57:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.599 08:57:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:52.599 08:57:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.599 08:57:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:52.599 08:57:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.599 08:57:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.599 08:57:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.599 08:57:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.858 08:57:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MTcyMjJiN2E0NGJhNzIzOWRjM2FkNjc5ODk1YzhjODK/tvc3: --dhchap-ctrl-secret DHHC-1:02:MWUyZjQwZmMzZWMzNWFmNmU1OWY1NmMwMWNhYWUyMjFiNWZiN2IyYTg1NDA3MjY49rCtNQ==: 00:18:53.425 08:57:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.425 08:57:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:53.425 08:57:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:53.425 08:57:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.425 08:57:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:53.425 08:57:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.425 08:57:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:53.425 08:57:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:53.684 08:57:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:18:53.684 08:57:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.684 08:57:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:53.684 08:57:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:53.684 08:57:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:53.684 08:57:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.684 08:57:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.684 08:57:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:53.684 08:57:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.684 08:57:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:53.684 08:57:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.684 08:57:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.943 00:18:53.943 08:57:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:53.943 08:57:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.943 08:57:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.202 08:57:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.202 08:57:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.202 08:57:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:54.202 08:57:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.202 08:57:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:54.202 08:57:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.202 { 00:18:54.202 "cntlid": 117, 00:18:54.202 "qid": 0, 00:18:54.202 "state": "enabled", 00:18:54.202 "listen_address": { 00:18:54.202 "trtype": "RDMA", 00:18:54.202 "adrfam": "IPv4", 00:18:54.202 "traddr": "192.168.100.8", 00:18:54.202 "trsvcid": "4420" 00:18:54.202 }, 00:18:54.202 "peer_address": { 00:18:54.202 "trtype": "RDMA", 00:18:54.202 "adrfam": "IPv4", 00:18:54.202 "traddr": "192.168.100.8", 00:18:54.202 "trsvcid": "44725" 00:18:54.202 }, 00:18:54.202 "auth": { 00:18:54.202 "state": "completed", 00:18:54.202 "digest": "sha512", 00:18:54.202 "dhgroup": "ffdhe3072" 00:18:54.202 } 00:18:54.202 } 00:18:54.202 ]' 
00:18:54.202 08:57:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.202 08:57:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:54.202 08:57:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.202 08:57:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:54.202 08:57:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.202 08:57:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.202 08:57:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.202 08:57:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.461 08:57:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTE5ODY0ZTVhNjVmNzY2NTZhMzc0MDQ4ZjZhZWU4MWZmYjFkMjUwMTJhNGZiZDViedavPg==: --dhchap-ctrl-secret DHHC-1:01:OGRiMjNhNGU1NWFlMGM0MjA1ZmI0ZWFhODdlNTc5MzMQ7ZlB: 00:18:55.028 08:57:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.028 08:57:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:55.028 08:57:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:55.028 08:57:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.028 08:57:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:55.028 08:57:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.028 08:57:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:55.028 08:57:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:55.286 08:57:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:18:55.286 08:57:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.286 08:57:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:55.286 08:57:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:55.286 08:57:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:55.286 08:57:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.286 08:57:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:18:55.286 08:57:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:55.286 08:57:17 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.286 08:57:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:55.286 08:57:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:55.286 08:57:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:55.545 00:18:55.545 08:57:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.545 08:57:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.545 08:57:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.803 08:57:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.803 08:57:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.803 08:57:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:55.803 08:57:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.803 08:57:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:55.803 08:57:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.803 { 00:18:55.803 "cntlid": 119, 00:18:55.803 "qid": 0, 00:18:55.803 "state": "enabled", 00:18:55.803 "listen_address": { 00:18:55.803 "trtype": "RDMA", 00:18:55.803 "adrfam": "IPv4", 00:18:55.803 "traddr": "192.168.100.8", 00:18:55.803 "trsvcid": "4420" 00:18:55.803 }, 00:18:55.803 "peer_address": { 00:18:55.803 "trtype": "RDMA", 00:18:55.803 "adrfam": "IPv4", 00:18:55.803 "traddr": "192.168.100.8", 00:18:55.803 "trsvcid": "45463" 00:18:55.803 }, 00:18:55.803 "auth": { 00:18:55.803 "state": "completed", 00:18:55.803 "digest": "sha512", 00:18:55.803 "dhgroup": "ffdhe3072" 00:18:55.803 } 00:18:55.803 } 00:18:55.803 ]' 00:18:55.803 08:57:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.803 08:57:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:55.803 08:57:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.803 08:57:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:55.803 08:57:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.803 08:57:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.803 08:57:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.803 08:57:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.062 08:57:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 
1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTNiODA4NjViMGMyNWVjMmY5MjE3YmM3NTIxYjBlM2Q3MDY5MGQxODE1NmM2ZmEyZGEzM2ZhNGZjYzBiNTlmNnIJGK8=: 00:18:56.676 08:57:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.676 08:57:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:56.676 08:57:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.676 08:57:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.676 08:57:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.676 08:57:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:56.676 08:57:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.676 08:57:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:56.676 08:57:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:56.934 08:57:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:18:56.934 08:57:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.934 08:57:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:56.934 08:57:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:56.934 08:57:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:56.934 08:57:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.934 08:57:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.934 08:57:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.934 08:57:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.935 08:57:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.935 08:57:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.935 08:57:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.193 00:18:57.193 08:57:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.193 08:57:19 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.193 08:57:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.451 08:57:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.451 08:57:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.451 08:57:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:57.451 08:57:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.451 08:57:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:57.451 08:57:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.451 { 00:18:57.451 "cntlid": 121, 00:18:57.451 "qid": 0, 00:18:57.451 "state": "enabled", 00:18:57.451 "listen_address": { 00:18:57.451 "trtype": "RDMA", 00:18:57.451 "adrfam": "IPv4", 00:18:57.451 "traddr": "192.168.100.8", 00:18:57.451 "trsvcid": "4420" 00:18:57.451 }, 00:18:57.451 "peer_address": { 00:18:57.451 "trtype": "RDMA", 00:18:57.451 "adrfam": "IPv4", 00:18:57.451 "traddr": "192.168.100.8", 00:18:57.451 "trsvcid": "42508" 00:18:57.451 }, 00:18:57.451 "auth": { 00:18:57.451 "state": "completed", 00:18:57.451 "digest": "sha512", 00:18:57.451 "dhgroup": "ffdhe4096" 00:18:57.451 } 00:18:57.451 } 00:18:57.451 ]' 00:18:57.451 08:57:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.451 08:57:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:57.451 08:57:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.451 08:57:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:57.451 08:57:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.451 08:57:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.451 08:57:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.451 08:57:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.709 08:57:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTNiY2RkY2Y2NjkzYjZlNmNmNTkyYjBjZDRlYmU0ZTZlMmQ3MzNmYzk0YTE0NmZiK9xOsw==: --dhchap-ctrl-secret DHHC-1:03:ZGZlY2E1NGIxMTZhZTRmOThmNTMzODg2MWY0MDNhMTVhMTFhZjNiYzQ4NTY4ZWU2ZmFiODhlNzQ0YWRmYjNiZmCcivk=: 00:18:58.276 08:57:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.534 08:57:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:18:58.534 08:57:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:58.534 08:57:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.534 
08:57:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:58.534 08:57:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.534 08:57:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:58.534 08:57:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:58.534 08:57:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:18:58.534 08:57:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.534 08:57:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:58.534 08:57:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:58.534 08:57:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:58.534 08:57:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.534 08:57:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.534 08:57:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:58.534 08:57:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.534 08:57:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:58.534 08:57:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.534 08:57:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.792 00:18:59.050 08:57:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.050 08:57:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.050 08:57:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.050 08:57:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.050 08:57:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.050 08:57:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:59.050 08:57:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.050 08:57:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:59.050 08:57:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.050 { 00:18:59.050 "cntlid": 123, 00:18:59.050 "qid": 0, 00:18:59.050 "state": "enabled", 
00:18:59.050 "listen_address": { 00:18:59.050 "trtype": "RDMA", 00:18:59.050 "adrfam": "IPv4", 00:18:59.050 "traddr": "192.168.100.8", 00:18:59.050 "trsvcid": "4420" 00:18:59.050 }, 00:18:59.050 "peer_address": { 00:18:59.050 "trtype": "RDMA", 00:18:59.050 "adrfam": "IPv4", 00:18:59.050 "traddr": "192.168.100.8", 00:18:59.050 "trsvcid": "42087" 00:18:59.050 }, 00:18:59.050 "auth": { 00:18:59.050 "state": "completed", 00:18:59.050 "digest": "sha512", 00:18:59.050 "dhgroup": "ffdhe4096" 00:18:59.050 } 00:18:59.050 } 00:18:59.050 ]' 00:18:59.050 08:57:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.050 08:57:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:59.050 08:57:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.309 08:57:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:59.309 08:57:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.309 08:57:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.309 08:57:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.309 08:57:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.309 08:57:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MTcyMjJiN2E0NGJhNzIzOWRjM2FkNjc5ODk1YzhjODK/tvc3: --dhchap-ctrl-secret DHHC-1:02:MWUyZjQwZmMzZWMzNWFmNmU1OWY1NmMwMWNhYWUyMjFiNWZiN2IyYTg1NDA3MjY49rCtNQ==: 00:18:59.875 08:57:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.133 08:57:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:00.133 08:57:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:00.133 08:57:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.133 08:57:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:00.133 08:57:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.133 08:57:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:00.133 08:57:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:00.391 08:57:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:00.391 08:57:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.391 08:57:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:00.391 08:57:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:00.391 08:57:22 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:00.391 08:57:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.391 08:57:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.391 08:57:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:00.391 08:57:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.391 08:57:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:00.391 08:57:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.391 08:57:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.648 00:19:00.648 08:57:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.648 08:57:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.648 08:57:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.648 08:57:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.648 08:57:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.648 08:57:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:00.648 08:57:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.648 08:57:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:00.648 08:57:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.648 { 00:19:00.648 "cntlid": 125, 00:19:00.648 "qid": 0, 00:19:00.648 "state": "enabled", 00:19:00.648 "listen_address": { 00:19:00.648 "trtype": "RDMA", 00:19:00.648 "adrfam": "IPv4", 00:19:00.648 "traddr": "192.168.100.8", 00:19:00.648 "trsvcid": "4420" 00:19:00.648 }, 00:19:00.648 "peer_address": { 00:19:00.648 "trtype": "RDMA", 00:19:00.648 "adrfam": "IPv4", 00:19:00.648 "traddr": "192.168.100.8", 00:19:00.648 "trsvcid": "60810" 00:19:00.648 }, 00:19:00.648 "auth": { 00:19:00.648 "state": "completed", 00:19:00.648 "digest": "sha512", 00:19:00.648 "dhgroup": "ffdhe4096" 00:19:00.648 } 00:19:00.648 } 00:19:00.648 ]' 00:19:00.648 08:57:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.905 08:57:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:00.906 08:57:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.906 08:57:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:00.906 08:57:23 nvmf_rdma.nvmf_auth_target -- 
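Between the teardown of one key and the qpair dump of the next, the trace always runs the same three RPCs, parameterized only by the key index and the dhgroup. Condensed, the per-key setup is the sketch below; every flag is taken from the trace, and the key names (key0..key3, ckey0..ckey3) are keyring entries the script registered earlier in the run, outside this excerpt.

  # One connect_authenticate setup, as exercised above for key2/ffdhe4096.
  rpc=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
  subnqn=nqn.2024-03.io.spdk:cnode0
  digest=sha512 dhgroup=ffdhe4096 key=key2

  # Host side: constrain the initiator to the digest/dhgroup under test.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Target side: admit the host on the subsystem with this key pair.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key "$key" --dhchap-ctrlr-key "c$key"

  # Host side: attach over RDMA; DH-HMAC-CHAP runs during this connect.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma \
      -f ipv4 -a 192.168.100.8 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key "$key" --dhchap-ctrlr-key "c$key"

Running the host-side bdev_nvme RPCs against a separate socket (/var/tmp/host.sock) while the target uses the default one appears to be what lets a single machine play both initiator and target throughout this run.
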
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.906 08:57:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.906 08:57:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.906 08:57:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.164 08:57:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTE5ODY0ZTVhNjVmNzY2NTZhMzc0MDQ4ZjZhZWU4MWZmYjFkMjUwMTJhNGZiZDViedavPg==: --dhchap-ctrl-secret DHHC-1:01:OGRiMjNhNGU1NWFlMGM0MjA1ZmI0ZWFhODdlNTc5MzMQ7ZlB: 00:19:01.730 08:57:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.730 08:57:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:01.730 08:57:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:01.730 08:57:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.730 08:57:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:01.730 08:57:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.730 08:57:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:01.730 08:57:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:01.988 08:57:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:01.988 08:57:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.988 08:57:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:01.988 08:57:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:01.988 08:57:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:01.988 08:57:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.988 08:57:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:19:01.988 08:57:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:01.988 08:57:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.988 08:57:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:01.988 08:57:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:01.988 08:57:24 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:02.246 00:19:02.246 08:57:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.246 08:57:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.246 08:57:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.504 08:57:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.504 08:57:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.504 08:57:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:02.504 08:57:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.504 08:57:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:02.504 08:57:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.504 { 00:19:02.504 "cntlid": 127, 00:19:02.504 "qid": 0, 00:19:02.504 "state": "enabled", 00:19:02.504 "listen_address": { 00:19:02.504 "trtype": "RDMA", 00:19:02.504 "adrfam": "IPv4", 00:19:02.504 "traddr": "192.168.100.8", 00:19:02.504 "trsvcid": "4420" 00:19:02.504 }, 00:19:02.504 "peer_address": { 00:19:02.504 "trtype": "RDMA", 00:19:02.504 "adrfam": "IPv4", 00:19:02.504 "traddr": "192.168.100.8", 00:19:02.504 "trsvcid": "59730" 00:19:02.504 }, 00:19:02.504 "auth": { 00:19:02.504 "state": "completed", 00:19:02.504 "digest": "sha512", 00:19:02.504 "dhgroup": "ffdhe4096" 00:19:02.504 } 00:19:02.504 } 00:19:02.504 ]' 00:19:02.504 08:57:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.504 08:57:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:02.504 08:57:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.504 08:57:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:02.504 08:57:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.504 08:57:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.504 08:57:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.504 08:57:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.763 08:57:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTNiODA4NjViMGMyNWVjMmY5MjE3YmM3NTIxYjBlM2Q3MDY5MGQxODE1NmM2ZmEyZGEzM2ZhNGZjYzBiNTlmNnIJGK8=: 00:19:03.330 08:57:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.330 08:57:25 
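Note the asymmetry in the key3 pass above: nvmf_subsystem_add_host and the attach carry only --dhchap-key, and the kernel connect passes a single --dhchap-secret with no --dhchap-ctrl-secret, so this leg exercises host authentication without the bidirectional controller challenge. That falls out of the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line visible in the trace: with ckeys[3] empty, the whole option expands to nothing. The idiom in isolation:

  # ${var:+word} expands to word only if var is set and non-empty; auth.sh
  # uses it to drop --dhchap-ctrlr-key entirely for the one-way key3 case.
  ckey3=""
  args=(${ckey3:+--dhchap-ctrlr-key "ckey3"})
  echo "${#args[@]}"   # 0 -- no controller-key arguments emitted
  ckey3="ckey3"
  args=(${ckey3:+--dhchap-ctrlr-key "ckey3"})
  echo "${#args[@]}"   # 2 -- option plus value
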
nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:03.330 08:57:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:03.330 08:57:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.330 08:57:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:03.330 08:57:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:03.330 08:57:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.330 08:57:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:03.330 08:57:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:03.589 08:57:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:19:03.589 08:57:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.589 08:57:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:03.589 08:57:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:03.589 08:57:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:03.589 08:57:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.589 08:57:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.589 08:57:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:03.589 08:57:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.589 08:57:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:03.589 08:57:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.589 08:57:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.848 00:19:03.848 08:57:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.848 08:57:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.848 08:57:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.107 08:57:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.107 08:57:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:19:04.107 08:57:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:04.107 08:57:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.107 08:57:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:04.107 08:57:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:04.107 { 00:19:04.107 "cntlid": 129, 00:19:04.107 "qid": 0, 00:19:04.107 "state": "enabled", 00:19:04.107 "listen_address": { 00:19:04.107 "trtype": "RDMA", 00:19:04.107 "adrfam": "IPv4", 00:19:04.107 "traddr": "192.168.100.8", 00:19:04.107 "trsvcid": "4420" 00:19:04.107 }, 00:19:04.107 "peer_address": { 00:19:04.107 "trtype": "RDMA", 00:19:04.107 "adrfam": "IPv4", 00:19:04.107 "traddr": "192.168.100.8", 00:19:04.107 "trsvcid": "53850" 00:19:04.107 }, 00:19:04.107 "auth": { 00:19:04.107 "state": "completed", 00:19:04.107 "digest": "sha512", 00:19:04.107 "dhgroup": "ffdhe6144" 00:19:04.107 } 00:19:04.107 } 00:19:04.107 ]' 00:19:04.107 08:57:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:04.107 08:57:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:04.107 08:57:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:04.107 08:57:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:04.107 08:57:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:04.365 08:57:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.365 08:57:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.365 08:57:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.365 08:57:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTNiY2RkY2Y2NjkzYjZlNmNmNTkyYjBjZDRlYmU0ZTZlMmQ3MzNmYzk0YTE0NmZiK9xOsw==: --dhchap-ctrl-secret DHHC-1:03:ZGZlY2E1NGIxMTZhZTRmOThmNTMzODg2MWY0MDNhMTVhMTFhZjNiYzQ4NTY4ZWU2ZmFiODhlNzQ0YWRmYjNiZmCcivk=: 00:19:04.932 08:57:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.191 08:57:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:05.191 08:57:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:05.191 08:57:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.191 08:57:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:05.191 08:57:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:05.191 08:57:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:05.191 08:57:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:05.191 08:57:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:05.191 08:57:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:05.191 08:57:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:05.191 08:57:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:05.191 08:57:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:05.191 08:57:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.191 08:57:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.191 08:57:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:05.191 08:57:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.450 08:57:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:05.450 08:57:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.450 08:57:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.709 00:19:05.709 08:57:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.709 08:57:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.709 08:57:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.967 08:57:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.967 08:57:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.967 08:57:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:05.967 08:57:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.967 08:57:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:05.967 08:57:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.967 { 00:19:05.967 "cntlid": 131, 00:19:05.967 "qid": 0, 00:19:05.967 "state": "enabled", 00:19:05.967 "listen_address": { 00:19:05.967 "trtype": "RDMA", 00:19:05.967 "adrfam": "IPv4", 00:19:05.968 "traddr": "192.168.100.8", 00:19:05.968 "trsvcid": "4420" 00:19:05.968 }, 00:19:05.968 "peer_address": { 00:19:05.968 "trtype": "RDMA", 00:19:05.968 "adrfam": "IPv4", 00:19:05.968 "traddr": "192.168.100.8", 00:19:05.968 "trsvcid": "44511" 00:19:05.968 }, 00:19:05.968 "auth": { 00:19:05.968 "state": "completed", 
00:19:05.968 "digest": "sha512", 00:19:05.968 "dhgroup": "ffdhe6144" 00:19:05.968 } 00:19:05.968 } 00:19:05.968 ]' 00:19:05.968 08:57:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.968 08:57:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:05.968 08:57:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.968 08:57:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:05.968 08:57:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.968 08:57:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.968 08:57:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.968 08:57:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.227 08:57:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MTcyMjJiN2E0NGJhNzIzOWRjM2FkNjc5ODk1YzhjODK/tvc3: --dhchap-ctrl-secret DHHC-1:02:MWUyZjQwZmMzZWMzNWFmNmU1OWY1NmMwMWNhYWUyMjFiNWZiN2IyYTg1NDA3MjY49rCtNQ==: 00:19:06.794 08:57:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.794 08:57:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:06.794 08:57:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:06.794 08:57:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.794 08:57:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:06.794 08:57:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.794 08:57:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:06.794 08:57:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:07.053 08:57:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:07.053 08:57:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.053 08:57:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:07.053 08:57:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:07.053 08:57:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:07.053 08:57:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.053 08:57:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key 
ckey2 00:19:07.053 08:57:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:07.053 08:57:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.053 08:57:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:07.053 08:57:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.053 08:57:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.311 00:19:07.311 08:57:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.311 08:57:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.312 08:57:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.571 08:57:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.571 08:57:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.571 08:57:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:07.571 08:57:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.571 08:57:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:07.571 08:57:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.571 { 00:19:07.571 "cntlid": 133, 00:19:07.571 "qid": 0, 00:19:07.571 "state": "enabled", 00:19:07.571 "listen_address": { 00:19:07.571 "trtype": "RDMA", 00:19:07.571 "adrfam": "IPv4", 00:19:07.571 "traddr": "192.168.100.8", 00:19:07.571 "trsvcid": "4420" 00:19:07.571 }, 00:19:07.571 "peer_address": { 00:19:07.571 "trtype": "RDMA", 00:19:07.571 "adrfam": "IPv4", 00:19:07.571 "traddr": "192.168.100.8", 00:19:07.571 "trsvcid": "41178" 00:19:07.571 }, 00:19:07.571 "auth": { 00:19:07.571 "state": "completed", 00:19:07.571 "digest": "sha512", 00:19:07.571 "dhgroup": "ffdhe6144" 00:19:07.571 } 00:19:07.571 } 00:19:07.571 ]' 00:19:07.571 08:57:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.571 08:57:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:07.571 08:57:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.571 08:57:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:07.571 08:57:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.828 08:57:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.828 08:57:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.829 08:57:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:07.829 08:57:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTE5ODY0ZTVhNjVmNzY2NTZhMzc0MDQ4ZjZhZWU4MWZmYjFkMjUwMTJhNGZiZDViedavPg==: --dhchap-ctrl-secret DHHC-1:01:OGRiMjNhNGU1NWFlMGM0MjA1ZmI0ZWFhODdlNTc5MzMQ7ZlB: 00:19:08.396 08:57:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.654 08:57:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:08.654 08:57:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:08.654 08:57:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.654 08:57:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:08.654 08:57:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.654 08:57:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:08.654 08:57:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:08.654 08:57:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:08.654 08:57:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.654 08:57:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:08.654 08:57:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:08.654 08:57:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:08.654 08:57:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.654 08:57:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:19:08.654 08:57:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:08.654 08:57:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.654 08:57:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:08.654 08:57:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:08.654 08:57:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:09.222 00:19:09.222 08:57:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:19:09.222 08:57:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.222 08:57:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.222 08:57:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.222 08:57:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.222 08:57:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:09.222 08:57:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.222 08:57:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:09.222 08:57:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.222 { 00:19:09.222 "cntlid": 135, 00:19:09.222 "qid": 0, 00:19:09.222 "state": "enabled", 00:19:09.222 "listen_address": { 00:19:09.222 "trtype": "RDMA", 00:19:09.222 "adrfam": "IPv4", 00:19:09.222 "traddr": "192.168.100.8", 00:19:09.222 "trsvcid": "4420" 00:19:09.222 }, 00:19:09.222 "peer_address": { 00:19:09.222 "trtype": "RDMA", 00:19:09.222 "adrfam": "IPv4", 00:19:09.222 "traddr": "192.168.100.8", 00:19:09.222 "trsvcid": "58045" 00:19:09.222 }, 00:19:09.222 "auth": { 00:19:09.222 "state": "completed", 00:19:09.222 "digest": "sha512", 00:19:09.222 "dhgroup": "ffdhe6144" 00:19:09.222 } 00:19:09.222 } 00:19:09.222 ]' 00:19:09.222 08:57:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.222 08:57:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:09.222 08:57:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.481 08:57:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:09.481 08:57:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.481 08:57:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.481 08:57:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.481 08:57:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.481 08:57:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTNiODA4NjViMGMyNWVjMmY5MjE3YmM3NTIxYjBlM2Q3MDY5MGQxODE1NmM2ZmEyZGEzM2ZhNGZjYzBiNTlmNnIJGK8=: 00:19:10.048 08:57:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.307 08:57:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:10.307 08:57:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:10.307 08:57:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.307 08:57:32 nvmf_rdma.nvmf_auth_target -- 
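An aside on the secrets handed to nvme connect throughout this section: they follow the NVMe DH-HMAC-CHAP secret representation DHHC-1:<t>:<base64>:, where, if I read the representation right, <t> names the hash the secret was transformed with (00 raw, 01/02/03 for SHA-256/384/512) and the base64 payload carries the key material plus a trailing CRC-32. The test never asserts any of this, so treat the sketch below as illustrative only; the secret string is the key0 host secret that recurs above.

  # Split and size one of this run's secrets (claims about the transform id
  # and the 4-byte CRC tail are from the spec as I understand it, not from
  # anything the test checks).
  secret='DHHC-1:00:ZTNiY2RkY2Y2NjkzYjZlNmNmNTkyYjBjZDRlYmU0ZTZlMmQ3MzNmYzk0YTE0NmZiK9xOsw==:'
  IFS=: read -r fmt xform b64 _ <<< "$secret"
  echo "format=$fmt transform=$xform"
  echo "$b64" | base64 -d | wc -c   # key material plus CRC tail
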
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:10.307 08:57:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:10.307 08:57:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.307 08:57:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:10.307 08:57:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:10.566 08:57:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:10.566 08:57:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.566 08:57:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:10.566 08:57:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:10.566 08:57:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:10.566 08:57:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.566 08:57:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.566 08:57:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:10.566 08:57:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.566 08:57:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:10.566 08:57:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.566 08:57:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:10.824 00:19:10.824 08:57:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:10.824 08:57:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:10.824 08:57:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.082 08:57:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.082 08:57:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.082 08:57:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:11.082 08:57:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.082 08:57:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:11.082 08:57:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.082 { 00:19:11.082 
"cntlid": 137, 00:19:11.082 "qid": 0, 00:19:11.082 "state": "enabled", 00:19:11.082 "listen_address": { 00:19:11.082 "trtype": "RDMA", 00:19:11.082 "adrfam": "IPv4", 00:19:11.082 "traddr": "192.168.100.8", 00:19:11.082 "trsvcid": "4420" 00:19:11.082 }, 00:19:11.082 "peer_address": { 00:19:11.082 "trtype": "RDMA", 00:19:11.082 "adrfam": "IPv4", 00:19:11.082 "traddr": "192.168.100.8", 00:19:11.082 "trsvcid": "55560" 00:19:11.082 }, 00:19:11.082 "auth": { 00:19:11.082 "state": "completed", 00:19:11.082 "digest": "sha512", 00:19:11.082 "dhgroup": "ffdhe8192" 00:19:11.082 } 00:19:11.082 } 00:19:11.082 ]' 00:19:11.082 08:57:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.082 08:57:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:11.082 08:57:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.082 08:57:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:11.082 08:57:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.340 08:57:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.340 08:57:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.340 08:57:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.340 08:57:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTNiY2RkY2Y2NjkzYjZlNmNmNTkyYjBjZDRlYmU0ZTZlMmQ3MzNmYzk0YTE0NmZiK9xOsw==: --dhchap-ctrl-secret DHHC-1:03:ZGZlY2E1NGIxMTZhZTRmOThmNTMzODg2MWY0MDNhMTVhMTFhZjNiYzQ4NTY4ZWU2ZmFiODhlNzQ0YWRmYjNiZmCcivk=: 00:19:11.908 08:57:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.167 08:57:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:12.167 08:57:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:12.167 08:57:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.167 08:57:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:12.167 08:57:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.167 08:57:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:12.167 08:57:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:12.426 08:57:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:12.426 08:57:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.426 08:57:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 
00:19:12.426 08:57:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:12.426 08:57:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:12.426 08:57:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.426 08:57:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.426 08:57:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:12.426 08:57:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.426 08:57:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:12.426 08:57:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.426 08:57:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:12.685 00:19:12.944 08:57:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.944 08:57:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.944 08:57:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.944 08:57:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.944 08:57:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.944 08:57:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:12.944 08:57:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.944 08:57:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:12.944 08:57:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.944 { 00:19:12.944 "cntlid": 139, 00:19:12.944 "qid": 0, 00:19:12.944 "state": "enabled", 00:19:12.944 "listen_address": { 00:19:12.944 "trtype": "RDMA", 00:19:12.944 "adrfam": "IPv4", 00:19:12.944 "traddr": "192.168.100.8", 00:19:12.944 "trsvcid": "4420" 00:19:12.944 }, 00:19:12.944 "peer_address": { 00:19:12.944 "trtype": "RDMA", 00:19:12.944 "adrfam": "IPv4", 00:19:12.944 "traddr": "192.168.100.8", 00:19:12.944 "trsvcid": "34629" 00:19:12.944 }, 00:19:12.944 "auth": { 00:19:12.944 "state": "completed", 00:19:12.944 "digest": "sha512", 00:19:12.944 "dhgroup": "ffdhe8192" 00:19:12.944 } 00:19:12.944 } 00:19:12.944 ]' 00:19:12.944 08:57:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.944 08:57:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:12.944 08:57:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.202 08:57:35 nvmf_rdma.nvmf_auth_target -- 
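The rhythm is fully mechanical by now: the sha512 leg of the test is two nested loops around connect_authenticate, with this excerpt joining the sweep partway through ffdhe4096 and running it out through ffdhe8192. In outline, roughly as below; hostrpc and connect_authenticate are the auth.sh helpers whose expansions are traced line by line above, and the dhgroup list comes from the printf visible near the end of this section.

  # Structure of the sweep this section is executing (digest fixed at sha512
  # in this leg; earlier legs of the log covered the other digests the same way).
  keys=(key0 key1 key2 key3)
  for dhgroup in null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
      for keyid in "${!keys[@]}"; do
          hostrpc bdev_nvme_set_options --dhchap-digests sha512 \
              --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha512 "$dhgroup" "$keyid"
      done
  done
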
target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:13.202 08:57:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.203 08:57:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.203 08:57:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.203 08:57:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.203 08:57:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MTcyMjJiN2E0NGJhNzIzOWRjM2FkNjc5ODk1YzhjODK/tvc3: --dhchap-ctrl-secret DHHC-1:02:MWUyZjQwZmMzZWMzNWFmNmU1OWY1NmMwMWNhYWUyMjFiNWZiN2IyYTg1NDA3MjY49rCtNQ==: 00:19:13.770 08:57:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.029 08:57:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:14.029 08:57:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:14.029 08:57:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.029 08:57:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:14.029 08:57:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.029 08:57:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:14.029 08:57:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:14.288 08:57:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:14.288 08:57:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.288 08:57:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:14.288 08:57:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:14.288 08:57:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:14.288 08:57:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.288 08:57:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.288 08:57:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:14.288 08:57:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.288 08:57:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:14.288 08:57:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.288 08:57:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:14.547 00:19:14.547 08:57:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.547 08:57:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.547 08:57:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.805 08:57:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.805 08:57:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.805 08:57:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:14.805 08:57:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.805 08:57:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:14.805 08:57:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.805 { 00:19:14.805 "cntlid": 141, 00:19:14.805 "qid": 0, 00:19:14.805 "state": "enabled", 00:19:14.805 "listen_address": { 00:19:14.805 "trtype": "RDMA", 00:19:14.805 "adrfam": "IPv4", 00:19:14.805 "traddr": "192.168.100.8", 00:19:14.805 "trsvcid": "4420" 00:19:14.805 }, 00:19:14.805 "peer_address": { 00:19:14.805 "trtype": "RDMA", 00:19:14.805 "adrfam": "IPv4", 00:19:14.805 "traddr": "192.168.100.8", 00:19:14.805 "trsvcid": "33650" 00:19:14.805 }, 00:19:14.805 "auth": { 00:19:14.805 "state": "completed", 00:19:14.805 "digest": "sha512", 00:19:14.805 "dhgroup": "ffdhe8192" 00:19:14.805 } 00:19:14.805 } 00:19:14.805 ]' 00:19:14.805 08:57:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.805 08:57:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:14.805 08:57:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.805 08:57:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:14.805 08:57:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.064 08:57:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.064 08:57:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.064 08:57:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.064 08:57:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:NTE5ODY0ZTVhNjVmNzY2NTZhMzc0MDQ4ZjZhZWU4MWZmYjFkMjUwMTJhNGZiZDViedavPg==: --dhchap-ctrl-secret 
DHHC-1:01:OGRiMjNhNGU1NWFlMGM0MjA1ZmI0ZWFhODdlNTc5MzMQ7ZlB: 00:19:15.631 08:57:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.889 08:57:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:15.889 08:57:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:15.889 08:57:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.889 08:57:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:15.889 08:57:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.889 08:57:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:15.889 08:57:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:16.148 08:57:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:19:16.148 08:57:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.148 08:57:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:16.148 08:57:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:16.148 08:57:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:16.148 08:57:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.148 08:57:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:19:16.148 08:57:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:16.148 08:57:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.148 08:57:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:16.148 08:57:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:16.148 08:57:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:16.407 00:19:16.407 08:57:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.407 08:57:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.407 08:57:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.665 08:57:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.665 08:57:39 
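Each pass ends the way this one is about to: detach the RPC-level controller, then re-prove the same key from the kernel initiator with nvme connect/disconnect before removing the host from the subsystem. Condensed, the kernel leg is the sketch below; every flag is verbatim from the connects above, and the secret strings are the key0/ckey0 pair this run uses (each keyid substitutes its own pair).

  # Kernel-initiator re-check of one key pair, as run after every RPC pass.
  hostid=801347e8-3fd0-e911-906e-0017a4403562
  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" --hostid "$hostid" \
      --dhchap-secret 'DHHC-1:00:ZTNiY2RkY2Y2NjkzYjZlNmNmNTkyYjBjZDRlYmU0ZTZlMmQ3MzNmYzk0YTE0NmZiK9xOsw==:' \
      --dhchap-ctrl-secret 'DHHC-1:03:ZGZlY2E1NGIxMTZhZTRmOThmNTMzODg2MWY0MDNhMTVhMTFhZjNiYzQ4NTY4ZWU2ZmFiODhlNzQ0YWRmYjNiZmCcivk=:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
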
nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.665 08:57:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:16.665 08:57:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.665 08:57:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:16.665 08:57:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.665 { 00:19:16.665 "cntlid": 143, 00:19:16.665 "qid": 0, 00:19:16.665 "state": "enabled", 00:19:16.665 "listen_address": { 00:19:16.665 "trtype": "RDMA", 00:19:16.665 "adrfam": "IPv4", 00:19:16.665 "traddr": "192.168.100.8", 00:19:16.665 "trsvcid": "4420" 00:19:16.665 }, 00:19:16.665 "peer_address": { 00:19:16.665 "trtype": "RDMA", 00:19:16.665 "adrfam": "IPv4", 00:19:16.665 "traddr": "192.168.100.8", 00:19:16.665 "trsvcid": "45856" 00:19:16.665 }, 00:19:16.665 "auth": { 00:19:16.665 "state": "completed", 00:19:16.665 "digest": "sha512", 00:19:16.665 "dhgroup": "ffdhe8192" 00:19:16.665 } 00:19:16.665 } 00:19:16.665 ]' 00:19:16.665 08:57:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.665 08:57:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:16.665 08:57:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.665 08:57:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:16.665 08:57:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.924 08:57:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.924 08:57:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.924 08:57:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.924 08:57:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTNiODA4NjViMGMyNWVjMmY5MjE3YmM3NTIxYjBlM2Q3MDY5MGQxODE1NmM2ZmEyZGEzM2ZhNGZjYzBiNTlmNnIJGK8=: 00:19:17.491 08:57:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.750 08:57:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:17.750 08:57:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:17.750 08:57:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.750 08:57:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:17.750 08:57:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:17.750 08:57:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:17.750 08:57:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:17.750 08:57:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@103 -- # printf %s 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:17.750 08:57:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:17.750 08:57:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:18.009 08:57:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:18.009 08:57:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.009 08:57:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:18.009 08:57:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:18.009 08:57:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:18.009 08:57:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.009 08:57:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.009 08:57:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:18.009 08:57:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.009 08:57:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:18.009 08:57:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.009 08:57:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.577 00:19:18.577 08:57:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.577 08:57:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.577 08:57:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.577 08:57:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.577 08:57:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.577 08:57:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:18.577 08:57:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.577 08:57:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:18.577 08:57:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.577 { 00:19:18.577 "cntlid": 145, 00:19:18.577 "qid": 0, 00:19:18.577 "state": "enabled", 00:19:18.577 
"listen_address": { 00:19:18.577 "trtype": "RDMA", 00:19:18.577 "adrfam": "IPv4", 00:19:18.577 "traddr": "192.168.100.8", 00:19:18.577 "trsvcid": "4420" 00:19:18.577 }, 00:19:18.577 "peer_address": { 00:19:18.577 "trtype": "RDMA", 00:19:18.577 "adrfam": "IPv4", 00:19:18.577 "traddr": "192.168.100.8", 00:19:18.577 "trsvcid": "55159" 00:19:18.577 }, 00:19:18.577 "auth": { 00:19:18.577 "state": "completed", 00:19:18.577 "digest": "sha512", 00:19:18.577 "dhgroup": "ffdhe8192" 00:19:18.577 } 00:19:18.577 } 00:19:18.577 ]' 00:19:18.577 08:57:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.577 08:57:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:18.577 08:57:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.577 08:57:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:18.577 08:57:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.835 08:57:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.835 08:57:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.835 08:57:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.835 08:57:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:ZTNiY2RkY2Y2NjkzYjZlNmNmNTkyYjBjZDRlYmU0ZTZlMmQ3MzNmYzk0YTE0NmZiK9xOsw==: --dhchap-ctrl-secret DHHC-1:03:ZGZlY2E1NGIxMTZhZTRmOThmNTMzODg2MWY0MDNhMTVhMTFhZjNiYzQ4NTY4ZWU2ZmFiODhlNzQ0YWRmYjNiZmCcivk=: 00:19:19.403 08:57:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.661 08:57:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:19.661 08:57:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:19.661 08:57:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.661 08:57:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:19.661 08:57:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 00:19:19.661 08:57:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:19.661 08:57:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.661 08:57:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:19.661 08:57:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:19.661 08:57:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 
00:19:19.661 08:57:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:19.661 08:57:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:19:19.661 08:57:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:19.661 08:57:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:19:19.661 08:57:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:19.662 08:57:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:19.662 08:57:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:51.805 request: 00:19:51.805 { 00:19:51.805 "name": "nvme0", 00:19:51.805 "trtype": "rdma", 00:19:51.805 "traddr": "192.168.100.8", 00:19:51.805 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:51.805 "adrfam": "ipv4", 00:19:51.805 "trsvcid": "4420", 00:19:51.805 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:51.805 "dhchap_key": "key2", 00:19:51.805 "method": "bdev_nvme_attach_controller", 00:19:51.805 "req_id": 1 00:19:51.805 } 00:19:51.805 Got JSON-RPC error response 00:19:51.805 response: 00:19:51.805 { 00:19:51.805 "code": -5, 00:19:51.805 "message": "Input/output error" 00:19:51.805 } 00:19:51.805 08:58:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:19:51.805 08:58:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:19:51.805 08:58:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:19:51.805 08:58:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:19:51.805 08:58:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:51.805 08:58:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.805 08:58:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.805 08:58:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.805 08:58:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.805 08:58:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.805 08:58:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.805 08:58:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.805 08:58:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@125 -- 
# NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:51.805 08:58:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:19:51.805 08:58:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:51.805 08:58:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:19:51.805 08:58:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:51.805 08:58:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:19:51.805 08:58:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:51.805 08:58:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:51.805 08:58:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:51.805 request: 00:19:51.805 { 00:19:51.805 "name": "nvme0", 00:19:51.805 "trtype": "rdma", 00:19:51.805 "traddr": "192.168.100.8", 00:19:51.805 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:19:51.805 "adrfam": "ipv4", 00:19:51.805 "trsvcid": "4420", 00:19:51.805 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:51.805 "dhchap_key": "key1", 00:19:51.805 "dhchap_ctrlr_key": "ckey2", 00:19:51.805 "method": "bdev_nvme_attach_controller", 00:19:51.805 "req_id": 1 00:19:51.805 } 00:19:51.805 Got JSON-RPC error response 00:19:51.805 response: 00:19:51.805 { 00:19:51.805 "code": -5, 00:19:51.805 "message": "Input/output error" 00:19:51.805 } 00:19:51.805 08:58:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:19:51.805 08:58:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:19:51.805 08:58:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:19:51.805 08:58:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:19:51.805 08:58:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:19:51.806 08:58:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.806 08:58:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.806 08:58:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.806 08:58:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key1 00:19:51.806 08:58:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.806 08:58:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.806 08:58:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.806 08:58:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.806 08:58:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:19:51.806 08:58:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.806 08:58:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:19:51.806 08:58:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:51.806 08:58:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:19:51.806 08:58:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:19:51.806 08:58:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.806 08:58:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.881 request: 00:20:23.881 { 00:20:23.881 "name": "nvme0", 00:20:23.881 "trtype": "rdma", 00:20:23.881 "traddr": "192.168.100.8", 00:20:23.881 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:23.881 "adrfam": "ipv4", 00:20:23.881 "trsvcid": "4420", 00:20:23.881 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:23.881 "dhchap_key": "key1", 00:20:23.881 "dhchap_ctrlr_key": "ckey1", 00:20:23.881 "method": "bdev_nvme_attach_controller", 00:20:23.881 "req_id": 1 00:20:23.881 } 00:20:23.881 Got JSON-RPC error response 00:20:23.881 response: 00:20:23.881 { 00:20:23.881 "code": -5, 00:20:23.881 "message": "Input/output error" 00:20:23.881 } 00:20:23.881 08:58:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:20:23.881 08:58:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:23.881 08:58:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:23.881 08:58:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:23.881 08:58:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 
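The two cases above are variations on the same mismatch theme: target/auth.sh@124/@125 registers key1 with controller key ckey1 but attaches with --dhchap-ctrlr-key ckey2, while @131/@132 registers key1 with no controller key at all although the host still requests bidirectional authentication with ckey1. In either configuration the DH-HMAC-CHAP exchange cannot complete and the attach is expected to return -5, as it does in both request/response pairs above. Roughly (with <hostnqn> standing in for the full host NQN shown in the log):

  # target: host key only, no controller (bidirectional) key
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> --dhchap-key key1
  # host: demand mutual authentication anyway -- expected Input/output error
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1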
00:20:23.881 08:58:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:23.881 08:58:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.881 08:58:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:23.881 08:58:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1339434 00:20:23.881 08:58:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 1339434 ']' 00:20:23.881 08:58:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 1339434 00:20:23.881 08:58:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:20:23.881 08:58:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:20:23.881 08:58:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1339434 00:20:23.881 08:58:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:20:23.881 08:58:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:20:23.881 08:58:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1339434' 00:20:23.881 killing process with pid 1339434 00:20:23.881 08:58:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 1339434 00:20:23.881 08:58:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 1339434 00:20:23.881 08:58:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:23.881 08:58:43 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:23.881 08:58:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@723 -- # xtrace_disable 00:20:23.881 08:58:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.881 08:58:43 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1371440 00:20:23.881 08:58:43 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1371440 00:20:23.881 08:58:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 1371440 ']' 00:20:23.881 08:58:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.881 08:58:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:23.881 08:58:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
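At this point the original target process (pid 1339434) has been killed and target/auth.sh@139 brings up a fresh nvmf_tgt for the remaining cases. The new instance is launched with --wait-for-rpc, which holds framework initialization until an explicit start-init RPC arrives, and -L nvmf_auth, which enables the nvmf_auth debug log component so the authentication exchanges can be traced; the launch line and the wait on /var/tmp/spdk.sock follow below.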
00:20:23.881 08:58:43 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:23.881 08:58:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:23.881 08:58:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.881 08:58:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:23.881 08:58:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:20:23.881 08:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:23.881 08:58:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@729 -- # xtrace_disable 00:20:23.881 08:58:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.881 08:58:44 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:23.881 08:58:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:23.881 08:58:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1371440 00:20:23.881 08:58:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@830 -- # '[' -z 1371440 ']' 00:20:23.881 08:58:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.881 08:58:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@835 -- # local max_retries=100 00:20:23.881 08:58:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
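Once the replacement target is listening, the suite re-runs the happy path (connect_authenticate sha512 ffdhe8192 3, target/auth.sh@153): register the host with key3, attach from the host side, and confirm on the target that the queue pair finished authenticating with the expected parameters. Stripped of the test harness, the verification loop is roughly the following, with <hostnqn> abbreviating the full host NQN and the jq filter condensed from the per-field checks in the log:

  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> --dhchap-key key3
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
  # expect digest "sha512", dhgroup "ffdhe8192", state "completed" on the first qpair
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'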
00:20:23.881 08:58:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@839 -- # xtrace_disable 00:20:23.881 08:58:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.881 08:58:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:20:23.881 08:58:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@863 -- # return 0 00:20:23.881 08:58:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:20:23.881 08:58:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:23.881 08:58:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.881 08:58:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:23.881 08:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:20:23.881 08:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.881 08:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:23.881 08:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:23.881 08:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:23.881 08:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.881 08:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:20:23.881 08:58:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:23.881 08:58:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.881 08:58:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:23.881 08:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:23.881 08:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:23.881 00:20:23.881 08:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.881 08:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.881 08:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.881 08:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.881 08:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.881 08:58:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:23.881 08:58:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.881 08:58:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:23.881 08:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:20:23.881 { 00:20:23.881 "cntlid": 1, 00:20:23.881 "qid": 0, 00:20:23.881 "state": "enabled", 00:20:23.881 "listen_address": { 00:20:23.881 "trtype": "RDMA", 00:20:23.881 "adrfam": "IPv4", 00:20:23.881 "traddr": "192.168.100.8", 00:20:23.881 "trsvcid": "4420" 00:20:23.881 }, 00:20:23.881 "peer_address": { 00:20:23.881 "trtype": "RDMA", 00:20:23.881 "adrfam": "IPv4", 00:20:23.881 "traddr": "192.168.100.8", 00:20:23.881 "trsvcid": "52266" 00:20:23.881 }, 00:20:23.881 "auth": { 00:20:23.881 "state": "completed", 00:20:23.881 "digest": "sha512", 00:20:23.881 "dhgroup": "ffdhe8192" 00:20:23.881 } 00:20:23.881 } 00:20:23.881 ]' 00:20:23.881 08:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.881 08:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:23.881 08:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.882 08:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:23.882 08:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.882 08:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.882 08:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.882 08:58:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.882 08:58:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid 801347e8-3fd0-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:YTNiODA4NjViMGMyNWVjMmY5MjE3YmM3NTIxYjBlM2Q3MDY5MGQxODE1NmM2ZmEyZGEzM2ZhNGZjYzBiNTlmNnIJGK8=: 00:20:24.140 08:58:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.399 08:58:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:20:24.399 08:58:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:24.399 08:58:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.399 08:58:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:24.399 08:58:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --dhchap-key key3 00:20:24.399 08:58:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:24.399 08:58:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.399 08:58:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:24.399 08:58:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:24.399 08:58:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:24.399 08:58:46 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:24.399 08:58:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:20:24.399 08:58:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:24.399 08:58:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:20:24.399 08:58:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:24.399 08:58:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:20:24.399 08:58:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:24.399 08:58:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:24.399 08:58:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:56.474 request: 00:20:56.474 { 00:20:56.474 "name": "nvme0", 00:20:56.474 "trtype": "rdma", 00:20:56.474 "traddr": "192.168.100.8", 00:20:56.474 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:20:56.474 "adrfam": "ipv4", 00:20:56.474 "trsvcid": "4420", 00:20:56.474 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:56.474 "dhchap_key": "key3", 00:20:56.474 "method": "bdev_nvme_attach_controller", 00:20:56.474 "req_id": 1 00:20:56.474 } 00:20:56.474 Got JSON-RPC error response 00:20:56.474 response: 00:20:56.474 { 00:20:56.474 "code": -5, 00:20:56.474 "message": "Input/output error" 00:20:56.474 } 00:20:56.474 08:59:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:20:56.474 08:59:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:20:56.474 08:59:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:20:56.474 08:59:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:20:56.474 08:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:20:56.474 08:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:20:56.474 08:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:56.474 08:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:56.474 08:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:56.474 08:59:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:20:56.474 08:59:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:56.474 08:59:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:20:56.474 08:59:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:56.474 08:59:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:20:56.474 08:59:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:20:56.474 08:59:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:56.474 08:59:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:28.564 request: 00:21:28.564 { 00:21:28.564 "name": "nvme0", 00:21:28.564 "trtype": "rdma", 00:21:28.564 "traddr": "192.168.100.8", 00:21:28.564 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:28.564 "adrfam": "ipv4", 00:21:28.564 "trsvcid": "4420", 00:21:28.564 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:28.564 "dhchap_key": "key3", 00:21:28.564 "method": "bdev_nvme_attach_controller", 00:21:28.564 "req_id": 1 00:21:28.564 } 00:21:28.564 Got JSON-RPC error response 00:21:28.564 response: 00:21:28.564 { 00:21:28.564 "code": -5, 00:21:28.564 "message": "Input/output error" 00:21:28.564 } 00:21:28.564 08:59:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:21:28.564 08:59:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:28.564 08:59:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:28.564 08:59:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:28.564 08:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:28.564 08:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:21:28.564 08:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:28.564 08:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:28.564 08:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:28.564 08:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:28.564 08:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:28.564 08:59:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:28.564 08:59:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.564 08:59:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:28.564 08:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:28.564 08:59:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:28.564 08:59:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.564 08:59:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:28.564 08:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:28.564 08:59:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:21:28.564 08:59:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:28.564 08:59:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:21:28.564 08:59:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:28.564 08:59:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:21:28.564 08:59:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:28.564 08:59:47 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:28.564 08:59:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:28.564 request: 00:21:28.564 { 00:21:28.564 "name": "nvme0", 00:21:28.564 "trtype": "rdma", 00:21:28.564 "traddr": "192.168.100.8", 00:21:28.564 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562", 00:21:28.564 "adrfam": "ipv4", 00:21:28.564 "trsvcid": "4420", 00:21:28.564 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:28.564 "dhchap_key": "key0", 00:21:28.564 "dhchap_ctrlr_key": "key1", 00:21:28.564 "method": "bdev_nvme_attach_controller", 00:21:28.564 "req_id": 1 00:21:28.564 } 00:21:28.564 Got JSON-RPC error response 00:21:28.564 response: 00:21:28.564 { 00:21:28.564 "code": 
-5, 00:21:28.564 "message": "Input/output error" 00:21:28.564 } 00:21:28.564 08:59:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:21:28.564 08:59:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:28.564 08:59:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:28.565 08:59:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:28.565 08:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:28.565 08:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:28.565 00:21:28.565 08:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:21:28.565 08:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:21:28.565 08:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.565 08:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.565 08:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.565 08:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.565 08:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:21:28.565 08:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:21:28.565 08:59:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1339517 00:21:28.565 08:59:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 1339517 ']' 00:21:28.565 08:59:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 1339517 00:21:28.565 08:59:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:21:28.565 08:59:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:28.565 08:59:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1339517 00:21:28.565 08:59:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:21:28.565 08:59:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:21:28.565 08:59:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1339517' 00:21:28.565 killing process with pid 1339517 00:21:28.565 08:59:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 1339517 00:21:28.565 08:59:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 1339517 00:21:28.565 08:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:28.565 08:59:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:28.565 08:59:49 nvmf_rdma.nvmf_auth_target 
-- nvmf/common.sh@117 -- # sync 00:21:28.565 08:59:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:28.565 08:59:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:28.565 08:59:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:21:28.565 08:59:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:28.565 08:59:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:28.565 rmmod nvme_rdma 00:21:28.565 rmmod nvme_fabrics 00:21:28.565 08:59:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:28.565 08:59:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:21:28.565 08:59:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:21:28.565 08:59:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1371440 ']' 00:21:28.565 08:59:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1371440 00:21:28.565 08:59:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@949 -- # '[' -z 1371440 ']' 00:21:28.565 08:59:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@953 -- # kill -0 1371440 00:21:28.565 08:59:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # uname 00:21:28.565 08:59:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:21:28.565 08:59:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1371440 00:21:28.565 08:59:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:21:28.565 08:59:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:21:28.565 08:59:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1371440' 00:21:28.565 killing process with pid 1371440 00:21:28.565 08:59:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@968 -- # kill 1371440 00:21:28.565 08:59:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@973 -- # wait 1371440 00:21:28.565 08:59:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:28.565 08:59:49 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:28.565 08:59:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.67K /tmp/spdk.key-sha256.AjE /tmp/spdk.key-sha384.Chs /tmp/spdk.key-sha512.tTG /tmp/spdk.key-sha512.5Se /tmp/spdk.key-sha384.2D6 /tmp/spdk.key-sha256.jvX '' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/nvmf-auth.log 00:21:28.565 00:21:28.565 real 4m18.670s 00:21:28.565 user 9m19.753s 00:21:28.565 sys 0m19.364s 00:21:28.565 08:59:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@1125 -- # xtrace_disable 00:21:28.565 08:59:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.565 ************************************ 00:21:28.565 END TEST nvmf_auth_target 00:21:28.565 ************************************ 00:21:28.565 08:59:49 nvmf_rdma -- nvmf/nvmf.sh@59 -- # '[' rdma = tcp ']' 00:21:28.565 08:59:49 nvmf_rdma -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:21:28.565 08:59:49 nvmf_rdma -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:21:28.565 08:59:49 nvmf_rdma -- 
common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:21:28.565 08:59:49 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:21:28.565 08:59:49 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:28.565 ************************************ 00:21:28.565 START TEST nvmf_fuzz 00:21:28.565 ************************************ 00:21:28.565 08:59:49 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:21:28.565 * Looking for test storage... 00:21:28.565 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:21:28.565 08:59:49 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:21:28.565 08:59:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:21:28.565 08:59:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:28.565 08:59:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:28.565 08:59:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:28.565 08:59:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:28.565 08:59:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:28.565 08:59:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:28.565 08:59:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:28.565 08:59:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:28.565 08:59:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:28.565 08:59:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:28.565 08:59:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:21:28.565 08:59:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:21:28.565 08:59:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:28.565 08:59:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:28.565 08:59:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:28.565 08:59:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:28.565 08:59:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:21:28.565 08:59:49 nvmf_rdma.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:28.565 08:59:49 nvmf_rdma.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:28.565 08:59:49 nvmf_rdma.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:28.565 08:59:49 nvmf_rdma.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.565 
08:59:49 nvmf_rdma.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.565 08:59:49 nvmf_rdma.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.565 08:59:49 nvmf_rdma.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:21:28.565 08:59:49 nvmf_rdma.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.565 08:59:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:21:28.566 08:59:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:28.566 08:59:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:28.566 08:59:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:28.566 08:59:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:28.566 08:59:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:28.566 08:59:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:28.566 08:59:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:28.566 08:59:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:28.566 08:59:49 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:21:28.566 08:59:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:28.566 08:59:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:28.566 08:59:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:28.566 08:59:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:28.566 08:59:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:28.566 08:59:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.566 08:59:49 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:28.566 08:59:49 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.566 08:59:49 
nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:28.566 08:59:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:28.566 08:59:49 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:21:28.566 08:59:49 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:32.759 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:32.759 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@377 -- # modinfo irdma 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:32.759 Found net devices under 0000:af:00.0: cvl_0_0 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:32.759 Found net devices under 0000:af:00.1: cvl_0_1 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@420 -- # rdma_device_init 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@58 -- # uname 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:32.759 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@104 -- # echo cvl_0_0 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@105 -- # continue 2 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@104 -- # echo cvl_0_1 
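For reference, the passage above is nvmf/common.sh discovering the test NICs: it walks the supported PCI IDs, matches both E810 functions (0x8086:0x159b, driver ice), loads irdma with RoCE enabled, and records the net devices cvl_0_0/cvl_0_1. A minimal sketch of that discovery, written against the sysfs layout only (an assumed simplification, not the literal helper code; the PCI IDs and the roce_ena parameter are the ones in the trace):
# Match Intel E810 functions by PCI vendor/device ID and list their net devices.
for pci in /sys/bus/pci/devices/*; do
    [[ "$(cat "$pci/vendor")" == "0x8086" && "$(cat "$pci/device")" == "0x159b" ]] || continue
    echo "Found ${pci##*/} (0x8086 - 0x159b)"
    ls "$pci/net" 2>/dev/null    # -> cvl_0_0, cvl_0_1 on this machine
done
modprobe irdma roce_ena=1        # RDMA driver for E810 in RoCE mode, as traced above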
00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@105 -- # continue 2 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:21:32.760 8: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:21:32.760 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:21:32.760 altname enp175s0f0np0 00:21:32.760 altname ens801f0np0 00:21:32.760 inet 192.168.100.8/24 scope global cvl_0_0 00:21:32.760 valid_lft forever preferred_lft forever 00:21:32.760 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:21:32.760 valid_lft forever preferred_lft forever 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:21:32.760 9: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:21:32.760 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:21:32.760 altname enp175s0f1np1 00:21:32.760 altname ens801f1np1 00:21:32.760 inet 192.168.100.9/24 scope global cvl_0_1 00:21:32.760 valid_lft forever preferred_lft forever 00:21:32.760 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:21:32.760 valid_lft forever preferred_lft forever 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh 
rxe-net 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@104 -- # echo cvl_0_0 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@105 -- # continue 2 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@104 -- # echo cvl_0_1 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@105 -- # continue 2 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:32.760 192.168.100.9' 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:32.760 192.168.100.9' 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@457 -- # head -n 1 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@458 -- # tail -n +2 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:32.760 192.168.100.9' 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@458 -- # head -n 1 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:32.760 
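The get_ip_address calls above reduce to one ip/awk/cut pipeline per interface, and the first and second target IPs are then split out of RDMA_IP_LIST exactly as the head/tail records show. A sketch with the interface names and addresses from this run (assumed equivalents of the common.sh helpers, not their literal code):
get_ip_address() {
    # First IPv4 address of an interface, prefix length stripped.
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}
RDMA_IP_LIST="$(get_ip_address cvl_0_0)
$(get_ip_address cvl_0_1)"
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9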
08:59:54 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1385055 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1385055 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@830 -- # '[' -z 1385055 ']' 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@835 -- # local max_retries=100 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:32.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@839 -- # xtrace_disable 00:21:32.760 08:59:54 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:33.327 08:59:55 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:21:33.327 08:59:55 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@863 -- # return 0 00:21:33.327 08:59:55 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:33.327 08:59:55 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:33.327 08:59:55 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:33.327 08:59:55 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:33.327 08:59:55 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:21:33.327 08:59:55 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:33.327 08:59:55 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:33.327 Malloc0 00:21:33.327 08:59:55 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:33.327 08:59:55 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:33.327 08:59:55 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:33.327 08:59:55 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:33.327 08:59:55 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:33.327 08:59:55 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:33.328 08:59:55 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:33.328 08:59:55 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:33.328 08:59:55 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:33.328 08:59:55 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:33.328 08:59:55 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:33.328 08:59:55 
nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:21:33.328 08:59:55 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:21:33.328 08:59:55 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420'
00:21:33.328 08:59:55 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a
00:22:05.489 Fuzzing completed. Shutting down the fuzz application
00:22:05.489
00:22:05.489 Dumping successful admin opcodes:
00:22:05.489 8, 9, 10, 24,
00:22:05.489 Dumping successful io opcodes:
00:22:05.489 0, 9,
00:22:05.489 NS: 0x200003af1f00 I/O qp, Total commands completed: 1205375, total successful commands: 7084, random_seed: 1090567168
00:22:05.489 NS: 0x200003af1f00 admin qp, Total commands completed: 151943, total successful commands: 1223, random_seed: 691307200
00:22:05.489 09:00:26 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a
00:22:05.489 Fuzzing completed. Shutting down the fuzz application
00:22:05.489
00:22:05.489 Dumping successful admin opcodes:
00:22:05.489 24,
00:22:05.489 Dumping successful io opcodes:
00:22:05.489
00:22:05.489 NS: 0x200003af1f00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1406151678
00:22:05.489 NS: 0x200003af1f00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1406214300
00:22:05.489 09:00:27 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:05.490 09:00:27 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@560 -- # xtrace_disable
00:22:05.490 09:00:27 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:22:05.490 09:00:27 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:22:05.490 09:00:27 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:22:05.490 09:00:27 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini
00:22:05.490 09:00:27 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup
00:22:05.490 09:00:27 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@117 -- # sync
00:22:05.490 09:00:27 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:22:05.490 09:00:27 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:22:05.490 09:00:27 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e
00:22:05.490 09:00:27 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20}
00:22:05.490 09:00:27 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:22:05.490 rmmod nvme_rdma
00:22:05.490 rmmod nvme_fabrics
00:22:05.490 09:00:27 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:22:05.490 09:00:27 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e
00:22:05.490 09:00:27 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0
00:22:05.490 09:00:27 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 1385055 ']'
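Stripped of the xtrace noise, the fabrics_fuzz.sh run traced above is a short sequence: start the target, wire up one malloc-backed subsystem over RDMA, fuzz it twice, then tear down. A sketch of that sequence, assuming $SPDK is the checkout root and that rpc.py stands in for the rpc_cmd wrapper (the flags and arguments are the ones in the trace):
$SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
$SPDK/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$SPDK/scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420'
# Run 1: 30 s of seeded random commands; run 2: replay the crafted example.json cases.
$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F "$trid" -j $SPDK/test/app/fuzz/nvme_fuzz/example.json -a
$SPDK/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid" && wait "$nvmfpid"    # teardown, which the trace continues with below
The random run completed 1,205,375 I/O commands (7,084 accepted); the JSON replay exercises only crafted admin commands, hence the zero I/O count in the second result block.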
00:22:05.490 09:00:27 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 1385055
00:22:05.490 09:00:27 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@949 -- # '[' -z 1385055 ']'
00:22:05.490 09:00:27 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@953 -- # kill -0 1385055
00:22:05.490 09:00:27 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@954 -- # uname
00:22:05.490 09:00:27 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:22:05.490 09:00:27 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1385055
00:22:05.490 09:00:27 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:22:05.490 09:00:27 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:22:05.490 09:00:27 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1385055'
00:22:05.490 killing process with pid 1385055
00:22:05.490 09:00:27 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@968 -- # kill 1385055
00:22:05.490 09:00:27 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@973 -- # wait 1385055
00:22:05.490 09:00:27 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:22:05.490 09:00:27 nvmf_rdma.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]]
00:22:05.490 09:00:27 nvmf_rdma.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt
00:22:05.490
00:22:05.490 real 0m38.309s
00:22:05.490 user 0m52.040s
00:22:05.490 sys 0m17.509s
00:22:05.490 09:00:27 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@1125 -- # xtrace_disable
00:22:05.490 09:00:27 nvmf_rdma.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:22:05.490 ************************************
00:22:05.490 END TEST nvmf_fuzz
00:22:05.490 ************************************
00:22:05.490 09:00:27 nvmf_rdma -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma
00:22:05.490 09:00:27 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:22:05.490 09:00:27 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable
00:22:05.490 09:00:27 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:22:05.490 ************************************
00:22:05.490 START TEST nvmf_multiconnection
00:22:05.490 ************************************
00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma
00:22:05.490 * Looking for test storage...
00:22:05.490 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- 
nvmf/common.sh@410 -- # local -g is_hw=no 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:22:05.490 09:00:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:10.766 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:10.766 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@377 -- # modinfo irdma 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:22:10.766 09:00:32 
nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:10.766 Found net devices under 0000:af:00.0: cvl_0_0 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:10.766 Found net devices under 0000:af:00.1: cvl_0_1 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@420 -- # rdma_device_init 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@58 -- # uname 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:10.766 09:00:32 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:10.766 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:10.766 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:10.766 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:10.766 09:00:33 
nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:10.766 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:10.766 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:10.766 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:10.766 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:10.766 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:10.766 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:10.766 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:22:10.766 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo cvl_0_0 00:22:10.766 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:22:10.766 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo cvl_0_1 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:22:10.767 8: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:22:10.767 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:22:10.767 altname enp175s0f0np0 00:22:10.767 altname ens801f0np0 00:22:10.767 inet 192.168.100.8/24 scope global cvl_0_0 00:22:10.767 valid_lft forever preferred_lft forever 00:22:10.767 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:22:10.767 valid_lft forever preferred_lft forever 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:22:10.767 09:00:33 
nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:22:10.767 9: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:22:10.767 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:22:10.767 altname enp175s0f1np1 00:22:10.767 altname ens801f1np1 00:22:10.767 inet 192.168.100.9/24 scope global cvl_0_1 00:22:10.767 valid_lft forever preferred_lft forever 00:22:10.767 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:22:10.767 valid_lft forever preferred_lft forever 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo cvl_0_0 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@104 -- # echo cvl_0_1 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@105 -- # continue 2 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@86 -- # 
for nic_name in $(get_rdma_if_list) 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:10.767 192.168.100.9' 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:10.767 192.168.100.9' 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@457 -- # head -n 1 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:10.767 192.168.100.9' 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@458 -- # tail -n +2 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@458 -- # head -n 1 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@723 -- # xtrace_disable 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=1393993 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 1393993 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@830 -- # '[' -z 1393993 ']' 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- 
common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local max_retries=100 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:10.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@839 -- # xtrace_disable 00:22:10.767 09:00:33 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:10.767 [2024-06-09 09:00:33.199471] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:22:10.767 [2024-06-09 09:00:33.199512] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:10.767 EAL: No free 2048 kB hugepages reported on node 1 00:22:10.767 [2024-06-09 09:00:33.254560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:11.027 [2024-06-09 09:00:33.334521] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:11.027 [2024-06-09 09:00:33.334553] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:11.027 [2024-06-09 09:00:33.334559] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:11.027 [2024-06-09 09:00:33.334566] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:11.027 [2024-06-09 09:00:33.334570] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:11.027 [2024-06-09 09:00:33.334612] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.027 [2024-06-09 09:00:33.334710] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:22:11.027 [2024-06-09 09:00:33.334796] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:22:11.027 [2024-06-09 09:00:33.334797] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.594 09:00:33 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:22:11.594 09:00:33 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@863 -- # return 0 00:22:11.594 09:00:34 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:11.594 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@729 -- # xtrace_disable 00:22:11.594 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:11.594 09:00:34 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:11.594 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:11.594 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.594 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:11.594 [2024-06-09 09:00:34.057978] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x23188f0/0x2317f30) succeed. 
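At this point the second target app is up: waitforlisten above blocked until the freshly launched nvmf_tgt (pid 1393993) both stayed alive and answered on /var/tmp/spdk.sock, after which EAL initialized, four reactors started, and the IB devices were created. A sketch of that polling idea, again assuming rpc.py (the real autotest_common.sh helper is more elaborate):
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1    # target died during startup
        # rpc_get_methods succeeds once the RPC server accepts connections.
        "$SPDK/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1
}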
00:22:11.594 [2024-06-09 09:00:34.066922] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x2319ca0/0x23184b0) succeed. 00:22:11.594 [2024-06-09 09:00:34.066949] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. New I/O unit size 24576 00:22:11.594 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.594 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:22:11.594 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:11.594 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:11.594 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.594 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:11.594 Malloc1 00:22:11.594 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.594 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:22:11.594 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.594 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:11.594 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.594 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:11.594 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.594 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:11.594 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.594 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:11.594 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.594 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:11.594 [2024-06-09 09:00:34.126008] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:11.594 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.594 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:11.594 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:22:11.594 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.595 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:11.595 Malloc2 00:22:11.595 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.595 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:22:11.595 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.595 09:00:34 
nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:11.853 Malloc3 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection 
-- common/autotest_common.sh@10 -- # set +x 00:22:11.853 Malloc4 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:11.853 Malloc5 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:11.853 Malloc6 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:11.853 Malloc7 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:11.853 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.854 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:22:11.854 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.854 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:11.854 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 
0 == 0 ]] 00:22:11.854 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420 00:22:11.854 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.854 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:11.854 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:11.854 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:11.854 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:22:11.854 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:11.854 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:12.113 Malloc8 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:12.113 Malloc9 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:12.113 Malloc10 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:12.113 Malloc11 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:12.113 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:22:12.372 09:00:34 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:22:12.372 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:22:12.372 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:22:12.372 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:22:12.372 09:00:34 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:22:14.271 09:00:36 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:22:14.271 09:00:36 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:14.271 09:00:36 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK1 00:22:14.271 09:00:36 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:22:14.271 09:00:36 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:22:14.271 09:00:36 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:22:14.271 09:00:36 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:14.271 09:00:36 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n 
nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:22:14.529 09:00:37 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:22:14.529 09:00:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:22:14.529 09:00:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:22:14.529 09:00:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:22:14.529 09:00:37 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:22:17.059 09:00:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:22:17.059 09:00:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:17.059 09:00:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK2 00:22:17.059 09:00:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:22:17.059 09:00:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:22:17.059 09:00:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:22:17.059 09:00:39 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:17.059 09:00:39 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:22:17.059 09:00:39 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:22:17.059 09:00:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:22:17.059 09:00:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:22:17.059 09:00:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:22:17.059 09:00:39 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:22:18.963 09:00:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:22:18.963 09:00:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:18.963 09:00:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK3 00:22:18.963 09:00:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:22:18.963 09:00:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:22:18.963 09:00:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:22:18.963 09:00:41 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:18.963 09:00:41 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:22:18.963 09:00:41 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:22:18.963 09:00:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 
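On the host side, each iteration pairs an nvme connect with the waitforserial poll being traced here: connect to cnodeN, then re-check lsblk every two seconds, up to 15 tries, until exactly one block device reports serial SPDKN. A condensed sketch of that pattern (multiconnection.sh@29-30), using the host UUID from this run:

# Connect-and-wait pattern mirrored from the traced waitforserial helper.
HOSTID=801347e8-3fd0-e911-906e-0017a4403562   # host UUID used throughout this run
for i in $(seq 1 "$NVMF_SUBSYS"); do
    nvme connect -i 15 \
        --hostnqn="nqn.2014-08.org.nvmexpress:uuid:$HOSTID" --hostid="$HOSTID" \
        -t rdma -n "nqn.2016-06.io.spdk:cnode$i" -a 192.168.100.8 -s 4420
    # waitforserial: poll until one device with serial SPDK$i shows up
    attempt=0
    while (( attempt++ <= 15 )); do
        sleep 2
        (( $(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i") == 1 )) && break
    done
done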
00:22:18.963 09:00:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:22:18.963 09:00:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:22:18.963 09:00:41 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:22:21.490 09:00:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:22:21.490 09:00:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:21.490 09:00:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK4 00:22:21.490 09:00:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:22:21.490 09:00:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:22:21.490 09:00:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:22:21.490 09:00:43 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:21.490 09:00:43 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:22:21.490 09:00:43 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:22:21.490 09:00:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:22:21.490 09:00:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:22:21.490 09:00:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:22:21.490 09:00:43 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:22:23.391 09:00:45 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:22:23.391 09:00:45 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:23.391 09:00:45 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK5 00:22:23.391 09:00:45 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:22:23.391 09:00:45 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:22:23.391 09:00:45 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:22:23.391 09:00:45 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:23.391 09:00:45 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420 00:22:23.391 09:00:45 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:22:23.391 09:00:45 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:22:23.392 09:00:45 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:22:23.650 09:00:45 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:22:23.650 09:00:45 
nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:22:25.565 09:00:47 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:22:25.565 09:00:47 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:25.565 09:00:47 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK6 00:22:25.565 09:00:47 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:22:25.565 09:00:47 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:22:25.565 09:00:47 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:22:25.565 09:00:47 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:25.565 09:00:47 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420 00:22:25.821 09:00:48 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:22:25.821 09:00:48 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:22:25.821 09:00:48 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:22:25.821 09:00:48 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:22:25.821 09:00:48 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:22:27.732 09:00:50 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:22:27.732 09:00:50 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:27.732 09:00:50 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK7 00:22:27.732 09:00:50 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:22:27.732 09:00:50 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:22:27.732 09:00:50 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:22:27.732 09:00:50 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:27.732 09:00:50 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420 00:22:27.995 09:00:50 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:22:27.995 09:00:50 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:22:27.995 09:00:50 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:22:27.995 09:00:50 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:22:27.995 09:00:50 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:22:29.893 09:00:52 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:22:29.893 09:00:52 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 
-- # lsblk -l -o NAME,SERIAL 00:22:29.893 09:00:52 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK8 00:22:30.150 09:00:52 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:22:30.150 09:00:52 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:22:30.150 09:00:52 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:22:30.150 09:00:52 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:30.150 09:00:52 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420 00:22:30.150 09:00:52 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:22:30.150 09:00:52 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:22:30.150 09:00:52 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:22:30.150 09:00:52 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:22:30.150 09:00:52 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:22:32.680 09:00:54 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:22:32.680 09:00:54 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:32.680 09:00:54 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK9 00:22:32.680 09:00:54 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:22:32.680 09:00:54 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:22:32.680 09:00:54 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:22:32.680 09:00:54 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:32.680 09:00:54 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420 00:22:32.680 09:00:54 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:22:32.680 09:00:54 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:22:32.680 09:00:54 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:22:32.680 09:00:54 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:22:32.680 09:00:54 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:22:34.592 09:00:56 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:22:34.592 09:00:56 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:34.592 09:00:56 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK10 00:22:34.592 09:00:56 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:22:34.592 09:00:56 
nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:22:34.593 09:00:56 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:22:34.593 09:00:56 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:34.593 09:00:56 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420 00:22:34.851 09:00:57 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:22:34.851 09:00:57 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local i=0 00:22:34.851 09:00:57 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:22:34.851 09:00:57 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:22:34.851 09:00:57 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # sleep 2 00:22:36.755 09:00:59 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:22:36.755 09:00:59 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:36.755 09:00:59 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # grep -c SPDK11 00:22:36.755 09:00:59 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:22:36.755 09:00:59 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:22:36.755 09:00:59 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # return 0 00:22:36.755 09:00:59 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:22:36.755 [global] 00:22:36.755 thread=1 00:22:36.755 invalidate=1 00:22:36.755 rw=read 00:22:36.755 time_based=1 00:22:36.755 runtime=10 00:22:36.755 ioengine=libaio 00:22:36.755 direct=1 00:22:36.755 bs=262144 00:22:36.755 iodepth=64 00:22:36.755 norandommap=1 00:22:36.755 numjobs=1 00:22:36.755 00:22:36.755 [job0] 00:22:36.755 filename=/dev/nvme0n1 00:22:36.755 [job1] 00:22:36.755 filename=/dev/nvme10n1 00:22:36.755 [job2] 00:22:36.755 filename=/dev/nvme11n1 00:22:36.755 [job3] 00:22:36.755 filename=/dev/nvme2n1 00:22:36.755 [job4] 00:22:36.755 filename=/dev/nvme3n1 00:22:36.755 [job5] 00:22:36.755 filename=/dev/nvme4n1 00:22:36.755 [job6] 00:22:36.755 filename=/dev/nvme5n1 00:22:36.755 [job7] 00:22:36.755 filename=/dev/nvme6n1 00:22:36.755 [job8] 00:22:36.755 filename=/dev/nvme7n1 00:22:36.755 [job9] 00:22:36.755 filename=/dev/nvme8n1 00:22:36.755 [job10] 00:22:36.755 filename=/dev/nvme9n1 00:22:37.044 Could not set queue depth (nvme0n1) 00:22:37.044 Could not set queue depth (nvme10n1) 00:22:37.044 Could not set queue depth (nvme11n1) 00:22:37.044 Could not set queue depth (nvme2n1) 00:22:37.044 Could not set queue depth (nvme3n1) 00:22:37.044 Could not set queue depth (nvme4n1) 00:22:37.044 Could not set queue depth (nvme5n1) 00:22:37.044 Could not set queue depth (nvme6n1) 00:22:37.044 Could not set queue depth (nvme7n1) 00:22:37.044 Could not set queue depth (nvme8n1) 00:22:37.044 Could not set queue depth (nvme9n1) 00:22:37.303 job0: (g=0): rw=read, bs=(R) 
256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:37.303 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:37.303 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:37.303 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:37.303 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:37.304 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:37.304 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:37.304 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:37.304 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:37.304 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:37.304 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:37.304 fio-3.35 00:22:37.304 Starting 11 threads 00:22:49.536 00:22:49.536 job0: (groupid=0, jobs=1): err= 0: pid=1398651: Sun Jun 9 09:01:09 2024 00:22:49.536 read: IOPS=1133, BW=283MiB/s (297MB/s)(2838MiB/10020msec) 00:22:49.536 slat (usec): min=10, max=18642, avg=873.29, stdev=2136.28 00:22:49.536 clat (msec): min=11, max=110, avg=55.56, stdev=21.71 00:22:49.536 lat (msec): min=11, max=111, avg=56.43, stdev=22.13 00:22:49.536 clat percentiles (msec): 00:22:49.536 | 1.00th=[ 22], 5.00th=[ 24], 10.00th=[ 25], 20.00th=[ 26], 00:22:49.536 | 30.00th=[ 53], 40.00th=[ 54], 50.00th=[ 55], 60.00th=[ 58], 00:22:49.536 | 70.00th=[ 68], 80.00th=[ 80], 90.00th=[ 83], 95.00th=[ 94], 00:22:49.536 | 99.00th=[ 96], 99.50th=[ 97], 99.90th=[ 105], 99.95th=[ 108], 00:22:49.536 | 99.99th=[ 110] 00:22:49.536 bw ( KiB/s): min=167936, max=654336, per=6.05%, avg=289024.00, stdev=131718.35, samples=20 00:22:49.536 iops : min= 656, max= 2556, avg=1129.00, stdev=514.52, samples=20 00:22:49.536 lat (msec) : 20=0.82%, 50=25.13%, 100=73.87%, 250=0.18% 00:22:49.536 cpu : usr=0.27%, sys=3.77%, ctx=2687, majf=0, minf=4097 00:22:49.536 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:22:49.536 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.536 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:49.536 issued rwts: total=11353,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.536 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:49.537 job1: (groupid=0, jobs=1): err= 0: pid=1398652: Sun Jun 9 09:01:09 2024 00:22:49.537 read: IOPS=3838, BW=960MiB/s (1006MB/s)(9616MiB/10019msec) 00:22:49.537 slat (usec): min=10, max=9655, avg=256.12, stdev=551.52 00:22:49.537 clat (usec): min=529, max=74010, avg=16400.27, stdev=6866.25 00:22:49.537 lat (usec): min=565, max=78929, avg=16656.40, stdev=6968.49 00:22:49.537 clat percentiles (usec): 00:22:49.537 | 1.00th=[11469], 5.00th=[11994], 10.00th=[12256], 20.00th=[12518], 00:22:49.537 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13042], 60.00th=[13304], 00:22:49.537 | 70.00th=[13566], 80.00th=[24511], 90.00th=[25822], 95.00th=[26870], 00:22:49.537 | 99.00th=[39584], 99.50th=[44827], 99.90th=[66847], 
99.95th=[69731], 00:22:49.537 | 99.99th=[73925] 00:22:49.537 bw ( KiB/s): min=511488, max=1264640, per=20.57%, avg=983104.35, stdev=302984.73, samples=20 00:22:49.537 iops : min= 1998, max= 4940, avg=3840.25, stdev=1183.53, samples=20 00:22:49.537 lat (usec) : 750=0.01%, 1000=0.02% 00:22:49.537 lat (msec) : 2=0.11%, 4=0.12%, 10=0.46%, 20=73.81%, 50=25.24% 00:22:49.537 lat (msec) : 100=0.23% 00:22:49.537 cpu : usr=0.55%, sys=8.74%, ctx=9113, majf=0, minf=4097 00:22:49.537 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:49.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:49.537 issued rwts: total=38462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.537 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:49.537 job2: (groupid=0, jobs=1): err= 0: pid=1398653: Sun Jun 9 09:01:09 2024 00:22:49.537 read: IOPS=967, BW=242MiB/s (254MB/s)(2429MiB/10038msec) 00:22:49.537 slat (usec): min=10, max=30903, avg=1026.23, stdev=2755.00 00:22:49.537 clat (msec): min=10, max=121, avg=65.05, stdev=14.14 00:22:49.537 lat (msec): min=10, max=125, avg=66.08, stdev=14.55 00:22:49.538 clat percentiles (msec): 00:22:49.538 | 1.00th=[ 47], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 54], 00:22:49.538 | 30.00th=[ 55], 40.00th=[ 56], 50.00th=[ 60], 60.00th=[ 67], 00:22:49.538 | 70.00th=[ 69], 80.00th=[ 81], 90.00th=[ 85], 95.00th=[ 95], 00:22:49.538 | 99.00th=[ 99], 99.50th=[ 103], 99.90th=[ 110], 99.95th=[ 115], 00:22:49.538 | 99.99th=[ 122] 00:22:49.538 bw ( KiB/s): min=163328, max=297472, per=5.17%, avg=247065.60, stdev=47292.10, samples=20 00:22:49.538 iops : min= 638, max= 1162, avg=965.10, stdev=184.73, samples=20 00:22:49.538 lat (msec) : 20=0.21%, 50=2.89%, 100=96.15%, 250=0.75% 00:22:49.538 cpu : usr=0.37%, sys=3.90%, ctx=2095, majf=0, minf=4097 00:22:49.538 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:22:49.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.538 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:49.538 issued rwts: total=9714,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.538 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:49.538 job3: (groupid=0, jobs=1): err= 0: pid=1398654: Sun Jun 9 09:01:09 2024 00:22:49.538 read: IOPS=1726, BW=432MiB/s (453MB/s)(4337MiB/10045msec) 00:22:49.538 slat (usec): min=9, max=12680, avg=566.19, stdev=1294.81 00:22:49.538 clat (usec): min=9367, max=91278, avg=36460.31, stdev=10944.75 00:22:49.538 lat (msec): min=9, max=101, avg=37.03, stdev=11.15 00:22:49.538 clat percentiles (usec): 00:22:49.538 | 1.00th=[23462], 5.00th=[24511], 10.00th=[25035], 20.00th=[25560], 00:22:49.538 | 30.00th=[26084], 40.00th=[27132], 50.00th=[36963], 60.00th=[39060], 00:22:49.538 | 70.00th=[44827], 80.00th=[49546], 90.00th=[50594], 95.00th=[51643], 00:22:49.538 | 99.00th=[56361], 99.50th=[58459], 99.90th=[81265], 99.95th=[84411], 00:22:49.538 | 99.99th=[91751] 00:22:49.538 bw ( KiB/s): min=319488, max=626176, per=9.26%, avg=442444.80, stdev=121074.54, samples=20 00:22:49.538 iops : min= 1248, max= 2446, avg=1728.30, stdev=472.95, samples=20 00:22:49.538 lat (msec) : 10=0.02%, 20=0.37%, 50=86.45%, 100=13.15% 00:22:49.538 cpu : usr=0.36%, sys=5.09%, ctx=4184, majf=0, minf=4097 00:22:49.538 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:49.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:22:49.538 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:49.538 issued rwts: total=17347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.539 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:49.539 job4: (groupid=0, jobs=1): err= 0: pid=1398655: Sun Jun 9 09:01:09 2024 00:22:49.539 read: IOPS=1150, BW=288MiB/s (302MB/s)(2887MiB/10037msec) 00:22:49.539 slat (usec): min=10, max=39260, avg=850.63, stdev=2743.46 00:22:49.539 clat (usec): min=656, max=134326, avg=54738.20, stdev=19047.29 00:22:49.539 lat (usec): min=683, max=134379, avg=55588.83, stdev=19491.57 00:22:49.539 clat percentiles (msec): 00:22:49.539 | 1.00th=[ 24], 5.00th=[ 27], 10.00th=[ 37], 20.00th=[ 40], 00:22:49.539 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 51], 60.00th=[ 52], 00:22:49.539 | 70.00th=[ 59], 80.00th=[ 80], 90.00th=[ 83], 95.00th=[ 94], 00:22:49.539 | 99.00th=[ 99], 99.50th=[ 104], 99.90th=[ 121], 99.95th=[ 133], 00:22:49.539 | 99.99th=[ 134] 00:22:49.539 bw ( KiB/s): min=166400, max=535552, per=6.15%, avg=293964.80, stdev=95163.63, samples=20 00:22:49.539 iops : min= 650, max= 2092, avg=1148.30, stdev=371.73, samples=20 00:22:49.539 lat (usec) : 750=0.01%, 1000=0.03% 00:22:49.539 lat (msec) : 2=0.19%, 4=0.16%, 20=0.14%, 50=49.76%, 100=49.03% 00:22:49.539 lat (msec) : 250=0.69% 00:22:49.539 cpu : usr=0.28%, sys=3.63%, ctx=2855, majf=0, minf=4097 00:22:49.539 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:22:49.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:49.539 issued rwts: total=11546,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.539 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:49.539 job5: (groupid=0, jobs=1): err= 0: pid=1398656: Sun Jun 9 09:01:09 2024 00:22:49.539 read: IOPS=2998, BW=750MiB/s (786MB/s)(7529MiB/10044msec) 00:22:49.539 slat (usec): min=9, max=24910, avg=326.98, stdev=948.18 00:22:49.539 clat (usec): min=10958, max=98982, avg=20998.44, stdev=11541.01 00:22:49.539 lat (usec): min=11155, max=99026, avg=21325.42, stdev=11725.56 00:22:49.539 clat percentiles (usec): 00:22:49.539 | 1.00th=[12256], 5.00th=[12649], 10.00th=[12780], 20.00th=[13173], 00:22:49.539 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13960], 60.00th=[16581], 00:22:49.539 | 70.00th=[26608], 80.00th=[27395], 90.00th=[33817], 95.00th=[44827], 00:22:49.539 | 99.00th=[67634], 99.50th=[68682], 99.90th=[83362], 99.95th=[90702], 00:22:49.539 | 99.99th=[95945] 00:22:49.539 bw ( KiB/s): min=262131, max=1223680, per=16.09%, avg=769356.15, stdev=346272.87, samples=20 00:22:49.539 iops : min= 1023, max= 4780, avg=3005.25, stdev=1352.70, samples=20 00:22:49.539 lat (msec) : 20=60.17%, 50=37.46%, 100=2.36% 00:22:49.539 cpu : usr=0.46%, sys=7.11%, ctx=7056, majf=0, minf=4097 00:22:49.539 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:49.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:49.540 issued rwts: total=30117,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.540 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:49.540 job6: (groupid=0, jobs=1): err= 0: pid=1398657: Sun Jun 9 09:01:09 2024 00:22:49.540 read: IOPS=1222, BW=306MiB/s (320MB/s)(3069MiB/10045msec) 00:22:49.540 slat (usec): min=9, max=39278, avg=799.54, stdev=2637.72 00:22:49.540 clat 
(msec): min=10, max=123, avg=51.52, stdev=18.18 00:22:49.540 lat (msec): min=10, max=123, avg=52.32, stdev=18.60 00:22:49.540 clat percentiles (msec): 00:22:49.540 | 1.00th=[ 24], 5.00th=[ 26], 10.00th=[ 26], 20.00th=[ 38], 00:22:49.540 | 30.00th=[ 44], 40.00th=[ 46], 50.00th=[ 54], 60.00th=[ 55], 00:22:49.540 | 70.00th=[ 56], 80.00th=[ 66], 90.00th=[ 73], 95.00th=[ 94], 00:22:49.540 | 99.00th=[ 97], 99.50th=[ 102], 99.90th=[ 116], 99.95th=[ 122], 00:22:49.540 | 99.99th=[ 124] 00:22:49.540 bw ( KiB/s): min=172032, max=608768, per=6.54%, avg=312678.40, stdev=110387.47, samples=20 00:22:49.540 iops : min= 672, max= 2378, avg=1221.40, stdev=431.20, samples=20 00:22:49.540 lat (msec) : 20=0.41%, 50=41.83%, 100=57.13%, 250=0.63% 00:22:49.540 cpu : usr=0.42%, sys=4.10%, ctx=2896, majf=0, minf=4097 00:22:49.540 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:22:49.540 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.540 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:49.540 issued rwts: total=12277,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.540 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:49.540 job7: (groupid=0, jobs=1): err= 0: pid=1398658: Sun Jun 9 09:01:09 2024 00:22:49.540 read: IOPS=2027, BW=507MiB/s (531MB/s)(5091MiB/10045msec) 00:22:49.540 slat (usec): min=9, max=15746, avg=479.76, stdev=1203.74 00:22:49.540 clat (usec): min=1012, max=94064, avg=31064.08, stdev=15808.36 00:22:49.540 lat (usec): min=1137, max=94095, avg=31543.84, stdev=16074.93 00:22:49.540 clat percentiles (usec): 00:22:49.540 | 1.00th=[11338], 5.00th=[11863], 10.00th=[12256], 20.00th=[12518], 00:22:49.540 | 30.00th=[12911], 40.00th=[25035], 50.00th=[36439], 60.00th=[38536], 00:22:49.540 | 70.00th=[43779], 80.00th=[49021], 90.00th=[50070], 95.00th=[51119], 00:22:49.540 | 99.00th=[55837], 99.50th=[56886], 99.90th=[71828], 99.95th=[86508], 00:22:49.540 | 99.99th=[93848] 00:22:49.541 bw ( KiB/s): min=315904, max=1298432, per=10.87%, avg=519628.80, stdev=327695.74, samples=20 00:22:49.541 iops : min= 1234, max= 5072, avg=2029.80, stdev=1280.06, samples=20 00:22:49.541 lat (msec) : 2=0.09%, 4=0.14%, 10=0.33%, 20=35.32%, 50=52.89% 00:22:49.541 lat (msec) : 100=11.23% 00:22:49.541 cpu : usr=0.47%, sys=5.95%, ctx=4966, majf=0, minf=3347 00:22:49.541 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:22:49.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:49.541 issued rwts: total=20362,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.541 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:49.541 job8: (groupid=0, jobs=1): err= 0: pid=1398659: Sun Jun 9 09:01:09 2024 00:22:49.541 read: IOPS=969, BW=242MiB/s (254MB/s)(2432MiB/10038msec) 00:22:49.541 slat (usec): min=14, max=26759, avg=1024.04, stdev=2575.07 00:22:49.541 clat (msec): min=11, max=117, avg=64.95, stdev=14.06 00:22:49.541 lat (msec): min=11, max=119, avg=65.97, stdev=14.45 00:22:49.541 clat percentiles (msec): 00:22:49.541 | 1.00th=[ 47], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 54], 00:22:49.541 | 30.00th=[ 55], 40.00th=[ 56], 50.00th=[ 60], 60.00th=[ 67], 00:22:49.541 | 70.00th=[ 69], 80.00th=[ 81], 90.00th=[ 85], 95.00th=[ 94], 00:22:49.541 | 99.00th=[ 100], 99.50th=[ 103], 99.90th=[ 117], 99.95th=[ 117], 00:22:49.541 | 99.99th=[ 118] 00:22:49.541 bw ( KiB/s): min=159744, max=301056, per=5.18%, 
avg=247474.90, stdev=47560.23, samples=20 00:22:49.541 iops : min= 624, max= 1176, avg=966.65, stdev=185.78, samples=20 00:22:49.541 lat (msec) : 20=0.17%, 50=3.21%, 100=95.80%, 250=0.82% 00:22:49.541 cpu : usr=0.40%, sys=4.19%, ctx=2050, majf=0, minf=4097 00:22:49.541 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:22:49.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:49.541 issued rwts: total=9729,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.541 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:49.541 job9: (groupid=0, jobs=1): err= 0: pid=1398660: Sun Jun 9 09:01:09 2024 00:22:49.541 read: IOPS=1549, BW=387MiB/s (406MB/s)(3888MiB/10038msec) 00:22:49.541 slat (usec): min=10, max=50424, avg=627.09, stdev=2186.01 00:22:49.541 clat (msec): min=10, max=144, avg=40.64, stdev=21.07 00:22:49.541 lat (msec): min=10, max=146, avg=41.27, stdev=21.48 00:22:49.541 clat percentiles (msec): 00:22:49.541 | 1.00th=[ 26], 5.00th=[ 26], 10.00th=[ 27], 20.00th=[ 27], 00:22:49.541 | 30.00th=[ 28], 40.00th=[ 28], 50.00th=[ 29], 60.00th=[ 37], 00:22:49.541 | 70.00th=[ 41], 80.00th=[ 59], 90.00th=[ 81], 95.00th=[ 90], 00:22:49.542 | 99.00th=[ 96], 99.50th=[ 100], 99.90th=[ 108], 99.95th=[ 140], 00:22:49.542 | 99.99th=[ 146] 00:22:49.542 bw ( KiB/s): min=175616, max=598016, per=8.29%, avg=396492.80, stdev=171923.41, samples=20 00:22:49.542 iops : min= 686, max= 2336, avg=1548.80, stdev=671.58, samples=20 00:22:49.542 lat (msec) : 20=0.15%, 50=78.43%, 100=21.09%, 250=0.33% 00:22:49.542 cpu : usr=0.36%, sys=4.64%, ctx=3861, majf=0, minf=4097 00:22:49.542 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:49.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:49.542 issued rwts: total=15552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:49.542 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:49.542 job10: (groupid=0, jobs=1): err= 0: pid=1398661: Sun Jun 9 09:01:09 2024 00:22:49.542 read: IOPS=1106, BW=277MiB/s (290MB/s)(2778MiB/10043msec) 00:22:49.542 slat (usec): min=10, max=43396, avg=867.00, stdev=2858.74 00:22:49.542 clat (usec): min=772, max=124673, avg=56930.56, stdev=19671.59 00:22:49.542 lat (usec): min=799, max=124913, avg=57797.55, stdev=20139.38 00:22:49.542 clat percentiles (msec): 00:22:49.542 | 1.00th=[ 17], 5.00th=[ 25], 10.00th=[ 27], 20.00th=[ 49], 00:22:49.542 | 30.00th=[ 50], 40.00th=[ 51], 50.00th=[ 52], 60.00th=[ 59], 00:22:49.542 | 70.00th=[ 67], 80.00th=[ 80], 90.00th=[ 83], 95.00th=[ 94], 00:22:49.542 | 99.00th=[ 97], 99.50th=[ 105], 99.90th=[ 121], 99.95th=[ 124], 00:22:49.542 | 99.99th=[ 125] 00:22:49.542 bw ( KiB/s): min=168448, max=542208, per=5.92%, avg=282854.40, stdev=87246.69, samples=20 00:22:49.542 iops : min= 658, max= 2118, avg=1104.90, stdev=340.81, samples=20 00:22:49.542 lat (usec) : 1000=0.05% 00:22:49.543 lat (msec) : 2=0.04%, 10=0.13%, 20=1.06%, 50=38.34%, 100=59.83% 00:22:49.543 lat (msec) : 250=0.55% 00:22:49.543 cpu : usr=0.33%, sys=3.56%, ctx=2961, majf=0, minf=4097 00:22:49.543 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:22:49.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:49.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:49.543 issued rwts: 
total=11112,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:22:49.543 latency : target=0, window=0, percentile=100.00%, depth=64
00:22:49.543
00:22:49.543 Run status group 0 (all jobs):
00:22:49.543 READ: bw=4668MiB/s (4895MB/s), 242MiB/s-960MiB/s (254MB/s-1006MB/s), io=45.8GiB (49.2GB), run=10019-10045msec
00:22:49.543
00:22:49.543 Disk stats (read/write):
00:22:49.543 nvme0n1: ios=22635/0, merge=0/0, ticks=1235062/0, in_queue=1235062, util=97.95%
00:22:49.543 nvme10n1: ios=76861/0, merge=0/0, ticks=1223666/0, in_queue=1223666, util=98.09%
00:22:49.543 nvme11n1: ios=19346/0, merge=0/0, ticks=1232974/0, in_queue=1232974, util=98.19%
00:22:49.543 nvme2n1: ios=34579/0, merge=0/0, ticks=1227527/0, in_queue=1227527, util=98.29%
00:22:49.543 nvme3n1: ios=23007/0, merge=0/0, ticks=1231890/0, in_queue=1231890, util=98.31%
00:22:49.543 nvme4n1: ios=60145/0, merge=0/0, ticks=1223682/0, in_queue=1223682, util=98.57%
00:22:49.543 nvme5n1: ios=24461/0, merge=0/0, ticks=1229659/0, in_queue=1229659, util=98.69%
00:22:49.543 nvme6n1: ios=40607/0, merge=0/0, ticks=1228128/0, in_queue=1228128, util=98.77%
00:22:49.543 nvme7n1: ios=19361/0, merge=0/0, ticks=1234373/0, in_queue=1234373, util=99.04%
00:22:49.543 nvme8n1: ios=30982/0, merge=0/0, ticks=1229869/0, in_queue=1229869, util=99.17%
00:22:49.543 nvme9n1: ios=22115/0, merge=0/0, ticks=1232159/0, in_queue=1232159, util=99.25%
00:22:49.543 09:01:09 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10
00:22:49.543 [global]
00:22:49.543 thread=1
00:22:49.543 invalidate=1
00:22:49.543 rw=randwrite
00:22:49.543 time_based=1
00:22:49.543 runtime=10
00:22:49.543 ioengine=libaio
00:22:49.543 direct=1
00:22:49.543 bs=262144
00:22:49.543 iodepth=64
00:22:49.543 norandommap=1
00:22:49.543 numjobs=1
00:22:49.543
00:22:49.543 [job0]
00:22:49.543 filename=/dev/nvme0n1
00:22:49.543 [job1]
00:22:49.543 filename=/dev/nvme10n1
00:22:49.543 [job2]
00:22:49.543 filename=/dev/nvme11n1
00:22:49.543 [job3]
00:22:49.543 filename=/dev/nvme2n1
00:22:49.543 [job4]
00:22:49.543 filename=/dev/nvme3n1
00:22:49.543 [job5]
00:22:49.543 filename=/dev/nvme4n1
00:22:49.543 [job6]
00:22:49.543 filename=/dev/nvme5n1
00:22:49.543 [job7]
00:22:49.543 filename=/dev/nvme6n1
00:22:49.543 [job8]
00:22:49.543 filename=/dev/nvme7n1
00:22:49.543 [job9]
00:22:49.543 filename=/dev/nvme8n1
00:22:49.543 [job10]
00:22:49.543 filename=/dev/nvme9n1
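For reference, the job file just dumped is equivalent to roughly this standalone fio invocation (a sketch assembled only from the parameters shown above; fio-wrapper emits one [jobN] section per connected /dev/nvmeXnY namespace, so a single job is spelled out here):

  fio --name=job0 --filename=/dev/nvme0n1 \
      --rw=randwrite --bs=262144 --iodepth=64 --ioengine=libaio \
      --direct=1 --thread --invalidate=1 --norandommap \
      --numjobs=1 --time_based --runtime=10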
00:22:49.544 Could not set queue depth (nvme0n1)
00:22:49.544 Could not set queue depth (nvme10n1)
00:22:49.544 Could not set queue depth (nvme11n1)
00:22:49.544 Could not set queue depth (nvme2n1)
00:22:49.544 Could not set queue depth (nvme3n1)
00:22:49.544 Could not set queue depth (nvme4n1)
00:22:49.544 Could not set queue depth (nvme5n1)
00:22:49.544 Could not set queue depth (nvme6n1)
00:22:49.544 Could not set queue depth (nvme7n1)
00:22:49.544 Could not set queue depth (nvme8n1)
00:22:49.544 Could not set queue depth (nvme9n1)
00:22:49.544 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:22:49.544 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:22:49.544 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:22:49.544 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:22:49.544 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:22:49.544 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:22:49.544 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:22:49.544 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:22:49.544 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:22:49.544 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:22:49.544 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:22:49.544 fio-3.35
00:22:49.544 Starting 11 threads
00:22:59.558
00:22:59.558 job0: (groupid=0, jobs=1): err= 0: pid=1400407: Sun Jun 9 09:01:20 2024
00:22:59.558 write: IOPS=944, BW=236MiB/s (248MB/s)(2381MiB/10082msec); 0 zone resets
00:22:59.558 slat (usec): min=24, max=53302, avg=1009.00, stdev=3978.50
00:22:59.558 clat (msec): min=2, max=184, avg=66.72, stdev=20.49
00:22:59.558 lat (msec): min=2, max=184, avg=67.73, stdev=21.12
00:22:59.558 clat percentiles (msec):
00:22:59.558 | 1.00th=[ 23], 5.00th=[ 44], 10.00th=[ 45], 20.00th=[ 47],
00:22:59.558 | 30.00th=[ 53], 40.00th=[ 59], 50.00th=[ 61], 60.00th=[ 69],
00:22:59.558 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 97], 95.00th=[ 104],
00:22:59.558 | 99.00th=[ 111], 99.50th=[ 118], 99.90th=[ 140], 99.95th=[ 155],
00:22:59.558 | 99.99th=[ 184]
00:22:59.558 bw ( KiB/s): min=157184, max=360448, per=6.84%, avg=242201.60, stdev=66643.17, samples=20
00:22:59.558 iops : min= 614, max= 1408, avg=946.10, stdev=260.32, samples=20
00:22:59.558 lat (msec) : 4=0.20%, 10=0.17%, 20=0.25%, 50=23.91%, 100=66.75%
00:22:59.558 lat (msec) : 250=8.73%
00:22:59.558 cpu : usr=4.02%, sys=3.13%, ctx=1935, majf=0, minf=1
00:22:59.558 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3%
00:22:59.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:22:59.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:22:59.558 issued rwts: total=0,9524,0,0 short=0,0,0,0 dropped=0,0,0,0
00:22:59.558 latency : target=0, window=0, percentile=100.00%, depth=64
00:22:59.558 job1: (groupid=0, jobs=1): err= 0: pid=1400427: Sun Jun 9 09:01:20 2024
00:22:59.558 write: IOPS=1116, BW=279MiB/s (293MB/s)(2796MiB/10022msec); 0 zone resets
00:22:59.558 slat (usec): min=20, max=33001, avg=891.69, stdev=2951.09
00:22:59.558 clat (msec): min=21, max=117, avg=56.43, stdev=15.25
00:22:59.558 lat (msec): min=25, max=120, avg=57.33, stdev=15.71
00:22:59.558 clat percentiles (msec):
00:22:59.558 | 1.00th=[ 27], 5.00th=[ 29], 10.00th=[ 42], 20.00th=[ 47],
00:22:59.558 | 30.00th=[ 49], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 58],
00:22:59.558 | 70.00th=[ 59], 80.00th=[ 65], 90.00th=[ 80], 95.00th=[ 83],
00:22:59.558 | 99.00th=[ 97], 99.50th=[ 101], 99.90th=[ 115], 99.95th=[ 117],
00:22:59.558 | 99.99th=[ 117]
00:22:59.558 bw ( KiB/s): min=159744, max=541184, per=8.04%, avg=284723.20, stdev=79175.74, samples=20
00:22:59.558 iops : min= 624, max= 2114, avg=1112.20, stdev=309.28, samples=20
00:22:59.558 lat (msec) : 50=34.96%, 100=64.58%, 250=0.46%
00:22:59.558 cpu : usr=2.20%, sys=3.01%, ctx=2295, majf=0, minf=1
00:22:59.558 IO depths : 1=0.1%,
2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:22:59.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:59.558 issued rwts: total=0,11185,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:59.558 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:59.558 job2: (groupid=0, jobs=1): err= 0: pid=1400431: Sun Jun 9 09:01:20 2024 00:22:59.558 write: IOPS=880, BW=220MiB/s (231MB/s)(2219MiB/10079msec); 0 zone resets 00:22:59.558 slat (usec): min=21, max=57980, avg=1123.80, stdev=3878.70 00:22:59.558 clat (msec): min=33, max=199, avg=71.53, stdev=16.78 00:22:59.558 lat (msec): min=33, max=199, avg=72.66, stdev=17.40 00:22:59.558 clat percentiles (msec): 00:22:59.558 | 1.00th=[ 55], 5.00th=[ 56], 10.00th=[ 57], 20.00th=[ 57], 00:22:59.558 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 65], 60.00th=[ 75], 00:22:59.558 | 70.00th=[ 81], 80.00th=[ 86], 90.00th=[ 100], 95.00th=[ 104], 00:22:59.558 | 99.00th=[ 107], 99.50th=[ 123], 99.90th=[ 146], 99.95th=[ 146], 00:22:59.558 | 99.99th=[ 201] 00:22:59.558 bw ( KiB/s): min=159232, max=287744, per=6.37%, avg=225551.95, stdev=48839.43, samples=20 00:22:59.558 iops : min= 622, max= 1124, avg=881.05, stdev=190.80, samples=20 00:22:59.558 lat (msec) : 50=0.19%, 100=90.50%, 250=9.31% 00:22:59.558 cpu : usr=1.80%, sys=2.86%, ctx=1927, majf=0, minf=1 00:22:59.558 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:22:59.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:59.558 issued rwts: total=0,8874,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:59.558 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:59.558 job3: (groupid=0, jobs=1): err= 0: pid=1400432: Sun Jun 9 09:01:20 2024 00:22:59.558 write: IOPS=922, BW=231MiB/s (242MB/s)(2325MiB/10079msec); 0 zone resets 00:22:59.558 slat (usec): min=22, max=45918, avg=1071.16, stdev=3439.05 00:22:59.558 clat (msec): min=32, max=156, avg=68.26, stdev=18.84 00:22:59.558 lat (msec): min=32, max=197, avg=69.33, stdev=19.36 00:22:59.558 clat percentiles (msec): 00:22:59.558 | 1.00th=[ 42], 5.00th=[ 43], 10.00th=[ 49], 20.00th=[ 56], 00:22:59.558 | 30.00th=[ 57], 40.00th=[ 58], 50.00th=[ 59], 60.00th=[ 66], 00:22:59.558 | 70.00th=[ 80], 80.00th=[ 85], 90.00th=[ 100], 95.00th=[ 104], 00:22:59.558 | 99.00th=[ 115], 99.50th=[ 132], 99.90th=[ 153], 99.95th=[ 157], 00:22:59.558 | 99.99th=[ 157] 00:22:59.558 bw ( KiB/s): min=145920, max=348160, per=6.67%, avg=236441.60, stdev=62476.50, samples=20 00:22:59.558 iops : min= 570, max= 1360, avg=923.60, stdev=244.05, samples=20 00:22:59.558 lat (msec) : 50=11.57%, 100=79.51%, 250=8.92% 00:22:59.558 cpu : usr=1.98%, sys=2.78%, ctx=1986, majf=0, minf=1 00:22:59.558 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:22:59.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:59.558 issued rwts: total=0,9300,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:59.558 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:59.558 job4: (groupid=0, jobs=1): err= 0: pid=1400433: Sun Jun 9 09:01:20 2024 00:22:59.558 write: IOPS=1452, BW=363MiB/s (381MB/s)(3660MiB/10079msec); 0 zone resets 00:22:59.558 slat (usec): min=16, max=79884, avg=668.87, stdev=2382.58 00:22:59.558 clat (usec): 
min=1109, max=175966, avg=43376.29, stdev=19591.78 00:22:59.558 lat (usec): min=1660, max=176020, avg=44045.17, stdev=19940.48 00:22:59.558 clat percentiles (msec): 00:22:59.558 | 1.00th=[ 16], 5.00th=[ 17], 10.00th=[ 17], 20.00th=[ 30], 00:22:59.558 | 30.00th=[ 34], 40.00th=[ 41], 50.00th=[ 45], 60.00th=[ 46], 00:22:59.558 | 70.00th=[ 51], 80.00th=[ 55], 90.00th=[ 62], 95.00th=[ 67], 00:22:59.558 | 99.00th=[ 105], 99.50th=[ 109], 99.90th=[ 171], 99.95th=[ 174], 00:22:59.558 | 99.99th=[ 176] 00:22:59.558 bw ( KiB/s): min=157696, max=967168, per=10.53%, avg=373145.60, stdev=170177.03, samples=20 00:22:59.558 iops : min= 616, max= 3778, avg=1457.60, stdev=664.75, samples=20 00:22:59.558 lat (msec) : 2=0.07%, 4=0.07%, 10=0.24%, 20=15.09%, 50=53.99% 00:22:59.558 lat (msec) : 100=27.76%, 250=2.79% 00:22:59.558 cpu : usr=2.90%, sys=4.13%, ctx=2886, majf=0, minf=1 00:22:59.558 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:59.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:59.558 issued rwts: total=0,14639,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:59.558 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:59.558 job5: (groupid=0, jobs=1): err= 0: pid=1400434: Sun Jun 9 09:01:20 2024 00:22:59.558 write: IOPS=2408, BW=602MiB/s (631MB/s)(6069MiB/10079msec); 0 zone resets 00:22:59.558 slat (usec): min=15, max=79512, avg=404.91, stdev=2090.85 00:22:59.558 clat (usec): min=518, max=182694, avg=26155.14, stdev=19863.53 00:22:59.558 lat (usec): min=723, max=184097, avg=26560.05, stdev=20239.13 00:22:59.558 clat percentiles (msec): 00:22:59.558 | 1.00th=[ 10], 5.00th=[ 16], 10.00th=[ 16], 20.00th=[ 17], 00:22:59.558 | 30.00th=[ 17], 40.00th=[ 17], 50.00th=[ 18], 60.00th=[ 18], 00:22:59.558 | 70.00th=[ 19], 80.00th=[ 44], 90.00th=[ 60], 95.00th=[ 63], 00:22:59.558 | 99.00th=[ 105], 99.50th=[ 107], 99.90th=[ 171], 99.95th=[ 176], 00:22:59.558 | 99.99th=[ 180] 00:22:59.558 bw ( KiB/s): min=135168, max=958464, per=17.50%, avg=619827.20, stdev=328060.55, samples=20 00:22:59.558 iops : min= 528, max= 3744, avg=2421.20, stdev=1281.49, samples=20 00:22:59.558 lat (usec) : 750=0.02%, 1000=0.02% 00:22:59.558 lat (msec) : 2=0.09%, 4=0.09%, 10=0.90%, 20=76.19%, 50=9.59% 00:22:59.558 lat (msec) : 100=11.56%, 250=1.54% 00:22:59.558 cpu : usr=3.60%, sys=5.48%, ctx=4005, majf=0, minf=1 00:22:59.558 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:22:59.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:59.558 issued rwts: total=0,24275,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:59.558 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:59.558 job6: (groupid=0, jobs=1): err= 0: pid=1400435: Sun Jun 9 09:01:20 2024 00:22:59.558 write: IOPS=1699, BW=425MiB/s (446MB/s)(4254MiB/10011msec); 0 zone resets 00:22:59.558 slat (usec): min=15, max=22570, avg=575.73, stdev=1730.64 00:22:59.558 clat (usec): min=5265, max=82342, avg=37064.21, stdev=17479.55 00:22:59.558 lat (usec): min=5320, max=83165, avg=37639.95, stdev=17808.73 00:22:59.558 clat percentiles (usec): 00:22:59.558 | 1.00th=[13698], 5.00th=[14746], 10.00th=[15664], 20.00th=[16581], 00:22:59.558 | 30.00th=[17433], 40.00th=[29754], 50.00th=[43779], 60.00th=[46924], 00:22:59.558 | 70.00th=[50594], 80.00th=[52691], 90.00th=[59507], 95.00th=[61080], 00:22:59.558 
| 99.00th=[66323], 99.50th=[67634], 99.90th=[72877], 99.95th=[74974], 00:22:59.558 | 99.99th=[79168] 00:22:59.558 bw ( KiB/s): min=265216, max=1011712, per=11.94%, avg=422858.11, stdev=251119.68, samples=19 00:22:59.558 iops : min= 1036, max= 3952, avg=1651.79, stdev=980.94, samples=19 00:22:59.558 lat (msec) : 10=0.02%, 20=35.58%, 50=31.18%, 100=33.21% 00:22:59.558 cpu : usr=3.09%, sys=4.12%, ctx=3141, majf=0, minf=1 00:22:59.558 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:59.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:59.558 issued rwts: total=0,17016,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:59.558 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:59.558 job7: (groupid=0, jobs=1): err= 0: pid=1400436: Sun Jun 9 09:01:20 2024 00:22:59.558 write: IOPS=1019, BW=255MiB/s (267MB/s)(2569MiB/10080msec); 0 zone resets 00:22:59.558 slat (usec): min=21, max=40802, avg=965.94, stdev=3096.89 00:22:59.558 clat (msec): min=21, max=183, avg=61.78, stdev=16.52 00:22:59.558 lat (msec): min=21, max=183, avg=62.75, stdev=16.97 00:22:59.558 clat percentiles (msec): 00:22:59.558 | 1.00th=[ 42], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 49], 00:22:59.558 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 58], 00:22:59.558 | 70.00th=[ 64], 80.00th=[ 70], 90.00th=[ 82], 95.00th=[ 97], 00:22:59.558 | 99.00th=[ 109], 99.50th=[ 127], 99.90th=[ 171], 99.95th=[ 182], 00:22:59.558 | 99.99th=[ 184] 00:22:59.558 bw ( KiB/s): min=150528, max=351744, per=7.38%, avg=261427.20, stdev=59699.80, samples=20 00:22:59.558 iops : min= 588, max= 1374, avg=1021.20, stdev=233.20, samples=20 00:22:59.558 lat (msec) : 50=25.15%, 100=70.38%, 250=4.47% 00:22:59.558 cpu : usr=2.16%, sys=2.79%, ctx=2207, majf=0, minf=1 00:22:59.558 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:22:59.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.558 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:59.558 issued rwts: total=0,10275,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:59.559 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:59.559 job8: (groupid=0, jobs=1): err= 0: pid=1400437: Sun Jun 9 09:01:20 2024 00:22:59.559 write: IOPS=1066, BW=267MiB/s (280MB/s)(2688MiB/10079msec); 0 zone resets 00:22:59.559 slat (usec): min=21, max=60614, avg=914.78, stdev=3343.86 00:22:59.559 clat (msec): min=17, max=183, avg=59.05, stdev=21.60 00:22:59.559 lat (msec): min=17, max=183, avg=59.96, stdev=22.15 00:22:59.559 clat percentiles (msec): 00:22:59.559 | 1.00th=[ 30], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 44], 00:22:59.559 | 30.00th=[ 46], 40.00th=[ 50], 50.00th=[ 54], 60.00th=[ 59], 00:22:59.559 | 70.00th=[ 62], 80.00th=[ 80], 90.00th=[ 95], 95.00th=[ 103], 00:22:59.559 | 99.00th=[ 107], 99.50th=[ 110], 99.90th=[ 171], 99.95th=[ 180], 00:22:59.559 | 99.99th=[ 184] 00:22:59.559 bw ( KiB/s): min=153088, max=471040, per=7.73%, avg=273664.00, stdev=91652.67, samples=20 00:22:59.559 iops : min= 598, max= 1840, avg=1069.00, stdev=358.02, samples=20 00:22:59.559 lat (msec) : 20=0.08%, 50=40.75%, 100=51.88%, 250=7.28% 00:22:59.559 cpu : usr=2.40%, sys=3.30%, ctx=2282, majf=0, minf=1 00:22:59.559 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:22:59.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:59.559 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:22:59.559 issued rwts: total=0,10753,0,0 short=0,0,0,0 dropped=0,0,0,0
00:22:59.559 latency : target=0, window=0, percentile=100.00%, depth=64
00:22:59.559 job9: (groupid=0, jobs=1): err= 0: pid=1400438: Sun Jun 9 09:01:20 2024
00:22:59.559 write: IOPS=1066, BW=267MiB/s (280MB/s)(2687MiB/10079msec); 0 zone resets
00:22:59.559 slat (usec): min=22, max=55190, avg=927.78, stdev=3680.83
00:22:59.559 clat (msec): min=5, max=178, avg=59.05, stdev=20.12
00:22:59.559 lat (msec): min=5, max=178, avg=59.98, stdev=20.68
00:22:59.559 clat percentiles (msec):
00:22:59.559 | 1.00th=[ 27], 5.00th=[ 34], 10.00th=[ 43], 20.00th=[ 45],
00:22:59.559 | 30.00th=[ 50], 40.00th=[ 52], 50.00th=[ 53], 60.00th=[ 59],
00:22:59.559 | 70.00th=[ 61], 80.00th=[ 73], 90.00th=[ 88], 95.00th=[ 103],
00:22:59.559 | 99.00th=[ 108], 99.50th=[ 136], 99.90th=[ 171], 99.95th=[ 178],
00:22:59.559 | 99.99th=[ 180]
00:22:59.559 bw ( KiB/s): min=141824, max=473088, per=7.72%, avg=273561.60, stdev=81789.89, samples=20
00:22:59.559 iops : min= 554, max= 1848, avg=1068.60, stdev=319.49, samples=20
00:22:59.559 lat (msec) : 10=0.03%, 20=0.14%, 50=33.76%, 100=58.91%, 250=7.16%
00:22:59.559 cpu : usr=2.13%, sys=3.49%, ctx=2206, majf=0, minf=1
00:22:59.559 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4%
00:22:59.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:22:59.559 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:22:59.559 issued rwts: total=0,10749,0,0 short=0,0,0,0 dropped=0,0,0,0
00:22:59.559 latency : target=0, window=0, percentile=100.00%, depth=64
00:22:59.559 job10: (groupid=0, jobs=1): err= 0: pid=1400439: Sun Jun 9 09:01:20 2024
00:22:59.559 write: IOPS=1289, BW=322MiB/s (338MB/s)(3230MiB/10022msec); 0 zone resets
00:22:59.559 slat (usec): min=21, max=22315, avg=745.47, stdev=2084.31
00:22:59.559 clat (usec): min=5427, max=88557, avg=48875.60, stdev=9794.43
00:22:59.559 lat (usec): min=5461, max=88606, avg=49621.07, stdev=10085.73
00:22:59.559 clat percentiles (usec):
00:22:59.559 | 1.00th=[27132], 5.00th=[28443], 10.00th=[33817], 20.00th=[43779],
00:22:59.559 | 30.00th=[45351], 40.00th=[47449], 50.00th=[49021], 60.00th=[50594],
00:22:59.559 | 70.00th=[52691], 80.00th=[58983], 90.00th=[61080], 95.00th=[63177],
00:22:59.559 | 99.00th=[67634], 99.50th=[69731], 99.90th=[74974], 99.95th=[83362],
00:22:59.559 | 99.99th=[85459]
00:22:59.559 bw ( KiB/s): min=263680, max=538624, per=9.29%, avg=329196.10, stdev=65671.36, samples=20
00:22:59.559 iops : min= 1030, max= 2104, avg=1285.90, stdev=256.53, samples=20
00:22:59.559 lat (msec) : 10=0.02%, 20=0.09%, 50=54.14%, 100=45.74%
00:22:59.559 cpu : usr=2.65%, sys=3.82%, ctx=2758, majf=0, minf=1
00:22:59.559 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5%
00:22:59.559 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:22:59.559 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:22:59.559 issued rwts: total=0,12921,0,0 short=0,0,0,0 dropped=0,0,0,0
00:22:59.559 latency : target=0, window=0, percentile=100.00%, depth=64
00:22:59.559
00:22:59.559 Run status group 0 (all jobs):
00:22:59.559 WRITE: bw=3459MiB/s (3627MB/s), 220MiB/s-602MiB/s (231MB/s-631MB/s), io=34.1GiB (36.6GB), run=10011-10082msec
00:22:59.559
00:22:59.559 Disk stats (read/write):
00:22:59.559 nvme0n1: ios=49/18889, merge=0/0, ticks=19/1233188, in_queue=1233207, util=97.78%
00:22:59.559 nvme10n1: ios=0/22150, merge=0/0, ticks=0/1235259, in_queue=1235259, util=97.82%
00:22:59.559 nvme11n1: ios=0/17590, merge=0/0, ticks=0/1233325, in_queue=1233325, util=97.92%
00:22:59.559 nvme2n1: ios=0/18458, merge=0/0, ticks=0/1231646, in_queue=1231646, util=98.03%
00:22:59.559 nvme3n1: ios=0/29150, merge=0/0, ticks=0/1228041, in_queue=1228041, util=98.09%
00:22:59.559 nvme4n1: ios=0/48395, merge=0/0, ticks=0/1227292, in_queue=1227292, util=98.36%
00:22:59.559 nvme5n1: ios=0/33729, merge=0/0, ticks=0/1236285, in_queue=1236285, util=98.48%
00:22:59.559 nvme6n1: ios=0/20422, merge=0/0, ticks=0/1232698, in_queue=1232698, util=98.56%
00:22:59.559 nvme7n1: ios=0/21312, merge=0/0, ticks=0/1228702, in_queue=1228702, util=98.87%
00:22:59.559 nvme8n1: ios=0/21370, merge=0/0, ticks=0/1230590, in_queue=1230590, util=99.00%
00:22:59.559 nvme9n1: ios=0/25615, merge=0/0, ticks=0/1236683, in_queue=1236683, util=99.09%
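As a quick cross-check of the aggregate WRITE line above: the eleven per-job 'bw ( KiB/s): ... avg=' values sum to about 3,542,598 KiB/s, i.e. ~3460 MiB/s, consistent with the reported 3459MiB/s. A throwaway one-liner along these lines reproduces the arithmetic (console.log is a placeholder for a saved copy of this randwrite section only, since the earlier randread section uses the same row format):

  awk '/bw \( KiB\/s\):/ { split($0, a, "avg="); split(a[2], b, ","); sum += b[1] }
       END { printf "%.0f MiB/s\n", sum/1024 }' console.log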
00:22:59.559 09:01:20 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync
00:22:59.559 09:01:20 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11
00:22:59.559 09:01:20 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:22:59.559 09:01:20 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:22:59.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:22:59.559 09:01:21 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1
00:22:59.559 09:01:21 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0
00:22:59.559 09:01:21 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL
00:22:59.559 09:01:21 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK1
00:22:59.559 09:01:21 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL
00:22:59.559 09:01:21 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK1
00:22:59.559 09:01:21 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0
00:22:59.559 09:01:21 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:59.559 09:01:21 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable
00:22:59.559 09:01:21 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:22:59.559 09:01:21 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:22:59.559 09:01:21 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:22:59.559 09:01:21 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2
00:23:00.125 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s)
00:23:00.125 09:01:22 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2
00:23:00.125 09:01:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0
00:23:00.125 09:01:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL
00:23:00.125 09:01:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK2
00:23:00.125 09:01:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:23:00.125
09:01:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK2 00:23:00.125 09:01:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:23:00.125 09:01:22 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:00.125 09:01:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:00.125 09:01:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:00.383 09:01:22 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:00.383 09:01:22 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:00.383 09:01:22 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:23:01.319 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:23:01.319 09:01:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:23:01.319 09:01:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:23:01.319 09:01:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:23:01.319 09:01:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK3 00:23:01.319 09:01:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK3 00:23:01.319 09:01:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:23:01.319 09:01:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:23:01.319 09:01:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:23:01.319 09:01:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:01.319 09:01:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:01.319 09:01:23 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:01.319 09:01:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:01.319 09:01:23 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:23:01.887 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:23:01.887 09:01:24 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:23:01.887 09:01:24 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:23:01.887 09:01:24 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:23:01.887 09:01:24 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK4 00:23:01.887 09:01:24 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:23:01.887 09:01:24 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK4 00:23:01.887 09:01:24 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:23:01.887 09:01:24 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:23:01.887 09:01:24 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:23:01.887 09:01:24 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:02.145 09:01:24 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:02.145 09:01:24 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:02.145 09:01:24 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:23:03.084 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:23:03.084 09:01:25 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:23:03.084 09:01:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:23:03.084 09:01:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:23:03.084 09:01:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK5 00:23:03.084 09:01:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:23:03.084 09:01:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK5 00:23:03.084 09:01:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:23:03.084 09:01:25 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:23:03.084 09:01:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:03.084 09:01:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:03.084 09:01:25 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:03.084 09:01:25 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:03.084 09:01:25 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:23:03.675 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:23:03.675 09:01:26 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:23:03.675 09:01:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:23:03.675 09:01:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:23:03.675 09:01:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK6 00:23:03.675 09:01:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:23:03.675 09:01:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK6 00:23:03.675 09:01:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:23:03.675 09:01:26 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:23:03.675 09:01:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:03.675 09:01:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:03.675 09:01:26 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:03.675 09:01:26 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:03.675 09:01:26 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect 
-n nqn.2016-06.io.spdk:cnode7 00:23:04.610 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:23:04.610 09:01:27 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:23:04.610 09:01:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:23:04.610 09:01:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:23:04.610 09:01:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK7 00:23:04.610 09:01:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:23:04.610 09:01:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK7 00:23:04.610 09:01:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:23:04.610 09:01:27 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:23:04.610 09:01:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:04.610 09:01:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:04.610 09:01:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:04.610 09:01:27 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:04.610 09:01:27 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:23:05.544 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:23:05.544 09:01:27 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:23:05.544 09:01:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:23:05.544 09:01:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:23:05.544 09:01:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK8 00:23:05.544 09:01:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:23:05.544 09:01:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK8 00:23:05.544 09:01:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:23:05.544 09:01:27 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:23:05.544 09:01:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:05.544 09:01:27 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:05.544 09:01:28 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:05.544 09:01:28 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:05.544 09:01:28 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:23:06.476 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:23:06.476 09:01:28 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:23:06.476 09:01:28 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:23:06.476 09:01:28 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 
00:23:06.476 09:01:28 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK9 00:23:06.476 09:01:28 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:23:06.476 09:01:28 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK9 00:23:06.476 09:01:28 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:23:06.476 09:01:28 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:23:06.476 09:01:28 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:06.476 09:01:28 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:06.476 09:01:28 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:06.476 09:01:28 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:06.476 09:01:28 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:23:07.410 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:23:07.410 09:01:29 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:23:07.410 09:01:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:23:07.410 09:01:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:23:07.410 09:01:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK10 00:23:07.410 09:01:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:23:07.410 09:01:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK10 00:23:07.410 09:01:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 0 00:23:07.410 09:01:29 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:23:07.410 09:01:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:07.410 09:01:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:23:07.410 09:01:29 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:07.410 09:01:29 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:23:07.410 09:01:29 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:23:08.345 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # local i=0 00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # grep -q -w SPDK11 00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1226 -- # grep -q -w SPDK11 00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1230 -- # return 
0
00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11
00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@560 -- # xtrace_disable
00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state
00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini
00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync
00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e
00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:23:08.345 rmmod nvme_rdma
00:23:08.345 rmmod nvme_fabrics
00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e
00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0
00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 1393993 ']'
00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 1393993
00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@949 -- # '[' -z 1393993 ']'
00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@953 -- # kill -0 1393993
00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@954 -- # uname
00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1393993
00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']'
00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1393993'
00:23:08.345 killing process with pid 1393993
00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@968 -- # kill 1393993
00:23:08.345 09:01:30 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@973 -- # wait 1393993
00:23:08.913 09:01:31 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:23:08.913 09:01:31 nvmf_rdma.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]]
00:23:08.913
00:23:08.913 real 1m3.399s
00:23:08.913 user 4m7.303s
00:23:08.913 sys 0m16.376s
00:23:08.913 09:01:31 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # xtrace_disable
00:23:08.913 09:01:31 nvmf_rdma.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:23:08.913 ************************************
00:23:08.913 END TEST nvmf_multiconnection
00:23:08.913 ************************************
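The eleven near-identical teardown blocks traced above all come from one loop in multiconnection.sh; with the xtrace noise stripped, each iteration boils down to the following sketch (waitforserial_disconnect and rpc_cmd are SPDK test helpers whose expansions appear in the trace; NVMF_SUBSYS is 11 in this run):

  for i in $(seq 1 $NVMF_SUBSYS); do
      nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"              # drop the initiator-side controller
      waitforserial_disconnect "SPDK${i}"                             # poll lsblk until serial SPDK${i} is gone
      rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"   # remove the subsystem from the target
  done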
00:23:08.913 09:01:31 nvmf_rdma -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma
00:23:08.913 09:01:31 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']'
00:23:08.913 09:01:31 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable
00:23:08.913 09:01:31 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:23:08.913 ************************************
00:23:08.913 START TEST nvmf_initiator_timeout
00:23:08.913 ************************************
00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma
00:23:08.913 * Looking for test storage...
00:23:08.913 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target
00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh
00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s
00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562
00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562
00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh
00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
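The variables exported here drive the initiator side of the test; plugged into NVME_CONNECT and NVME_HOST, a connect attempt takes roughly this shape (the target address is illustrative, formed from the NVMF_IP_PREFIX/NVMF_IP_LEAST_ADDR pair above; the trace later resets NVME_CONNECT to 'nvme connect -i 15' once the transport is confirmed to be RDMA):

  nvme connect -t rdma -a 192.168.100.8 -s 4420 \
      -n nqn.2016-06.io.spdk:testnqn \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 \
      --hostid=801347e8-3fd0-e911-906e-0017a4403562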
00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout 
-- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:23:08.913 09:01:31 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:14.173 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:14.173 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
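The ID matching traced above (vendor 0x8086, device 0x159b, handled by the ice driver) can be reproduced by hand with lspci's vendor:device filter, e.g.:

  lspci -d 8086:159b    # lists the E810 functions matched above (0000:af:00.0 and 0000:af:00.1)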
00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # modinfo irdma 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:14.173 Found net devices under 0000:af:00.0: cvl_0_0 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:14.173 Found net devices under 0000:af:00.1: cvl_0_1 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:23:14.173 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@420 -- # rdma_device_init 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # uname 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux 
']' 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # allocate_nic_ips 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo cvl_0_0 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo cvl_0_1 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:14.174 09:01:36 
nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:23:14.174 8: cvl_0_0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:23:14.174 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:23:14.174 altname enp175s0f0np0 00:23:14.174 altname ens801f0np0 00:23:14.174 inet 192.168.100.8/24 scope global cvl_0_0 00:23:14.174 valid_lft forever preferred_lft forever 00:23:14.174 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:23:14.174 valid_lft forever preferred_lft forever 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:23:14.174 9: cvl_0_1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 00:23:14.174 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:23:14.174 altname enp175s0f1np1 00:23:14.174 altname ens801f1np1 00:23:14.174 inet 192.168.100.9/24 scope global cvl_0_1 00:23:14.174 valid_lft forever preferred_lft forever 00:23:14.174 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:23:14.174 valid_lft forever preferred_lft forever 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:14.174 09:01:36 
nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo cvl_0_0 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@104 -- # echo cvl_0_1 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # continue 2 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:23:14.174 192.168.100.9' 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:23:14.174 192.168.100.9' 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # head -n 1 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:23:14.174 192.168.100.9' 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # tail -n +2 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # head -n 1 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' rdma == tcp 
']' 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@723 -- # xtrace_disable 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:14.174 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:14.175 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=1406568 00:23:14.175 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 1406568 00:23:14.175 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@830 -- # '[' -z 1406568 ']' 00:23:14.175 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.175 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local max_retries=100 00:23:14.175 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:14.175 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # xtrace_disable 00:23:14.175 09:01:36 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:14.175 [2024-06-09 09:01:36.555324] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:23:14.175 [2024-06-09 09:01:36.555366] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.175 EAL: No free 2048 kB hugepages reported on node 1 00:23:14.175 [2024-06-09 09:01:36.603747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:14.175 [2024-06-09 09:01:36.681319] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.175 [2024-06-09 09:01:36.681354] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:14.175 [2024-06-09 09:01:36.681360] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.175 [2024-06-09 09:01:36.681365] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:14.175 [2024-06-09 09:01:36.681370] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
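nvmfappstart above launches build/bin/nvmf_tgt with core mask 0xF, records nvmfpid, and waitforlisten blocks until the target answers RPCs on /var/tmp/spdk.sock; the DPDK EAL and reactor notices around this point are the target coming up on cores 0-3. A minimal sketch of that launch-and-poll pattern (not the exact waitforlisten helper; the probe RPC and retry budget here are assumptions):

  ./build/bin/nvmf_tgt -m 0xF &
  nvmfpid=$!
  for _ in {1..100}; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo 'target exited early' >&2; exit 1; }
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
  done

Polling an RPC rather than sleeping a fixed interval lets the suite proceed the moment the UNIX socket is actually serviced.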
00:23:14.175 [2024-06-09 09:01:36.681460] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.175 [2024-06-09 09:01:36.681556] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:23:14.175 [2024-06-09 09:01:36.681623] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:23:14.175 [2024-06-09 09:01:36.681624] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.110 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:23:15.110 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@863 -- # return 0 00:23:15.110 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:15.110 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@729 -- # xtrace_disable 00:23:15.111 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:15.111 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.111 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:15.111 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:15.111 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:15.111 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:15.111 Malloc0 00:23:15.111 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:15.111 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:23:15.111 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:15.111 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:15.111 Delay0 00:23:15.111 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:15.111 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:15.111 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:15.111 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:15.111 [2024-06-09 09:01:37.463426] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x1e92250/0x1e91890) succeed. 00:23:15.111 [2024-06-09 09:01:37.472549] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1e93540/0x1e91e10) succeed. 00:23:15.111 [2024-06-09 09:01:37.472570] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:23:15.111 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:15.111 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:15.111 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:15.111 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:15.111 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:15.111 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:15.111 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:15.111 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:15.111 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:15.111 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:15.111 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:15.111 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:15.111 [2024-06-09 09:01:37.500845] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:15.111 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:15.111 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:23:15.368 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:23:15.368 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1197 -- # local i=0 00:23:15.368 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local nvme_device_counter=1 nvme_devices=0 00:23:15.368 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # [[ -n '' ]] 00:23:15.368 09:01:37 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # sleep 2 00:23:17.267 09:01:39 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # (( i++ <= 15 )) 00:23:17.267 09:01:39 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:23:17.267 09:01:39 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # grep -c SPDKISFASTANDAWESOME 00:23:17.267 09:01:39 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # nvme_devices=1 00:23:17.267 09:01:39 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # (( nvme_devices == nvme_device_counter )) 00:23:17.267 09:01:39 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # return 0 00:23:17.267 09:01:39 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1407057 00:23:17.267 09:01:39 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:23:17.267 09:01:39 
nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:23:17.267 [global] 00:23:17.267 thread=1 00:23:17.267 invalidate=1 00:23:17.267 rw=write 00:23:17.267 time_based=1 00:23:17.267 runtime=60 00:23:17.267 ioengine=libaio 00:23:17.267 direct=1 00:23:17.267 bs=4096 00:23:17.267 iodepth=1 00:23:17.267 norandommap=0 00:23:17.267 numjobs=1 00:23:17.267 00:23:17.267 verify_dump=1 00:23:17.267 verify_backlog=512 00:23:17.267 verify_state_save=0 00:23:17.267 do_verify=1 00:23:17.267 verify=crc32c-intel 00:23:17.267 [job0] 00:23:17.267 filename=/dev/nvme0n1 00:23:17.267 Could not set queue depth (nvme0n1) 00:23:17.524 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:17.525 fio-3.35 00:23:17.525 Starting 1 thread 00:23:20.806 09:01:42 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:23:20.806 09:01:42 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:20.806 09:01:42 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:20.806 true 00:23:20.806 09:01:42 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:20.806 09:01:42 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:23:20.806 09:01:42 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:20.806 09:01:42 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:20.806 true 00:23:20.806 09:01:42 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:20.806 09:01:42 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:23:20.806 09:01:42 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:20.806 09:01:42 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:20.806 true 00:23:20.806 09:01:42 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:20.806 09:01:42 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:23:20.806 09:01:42 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:20.806 09:01:42 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:20.806 true 00:23:20.806 09:01:42 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:20.806 09:01:42 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:23:23.329 09:01:45 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:23:23.329 09:01:45 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:23.329 09:01:45 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:23.329 true 00:23:23.329 09:01:45 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:23.329 09:01:45 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency 
Delay0 avg_write 30 00:23:23.329 09:01:45 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:23.329 09:01:45 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:23.329 true 00:23:23.329 09:01:45 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:23.329 09:01:45 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:23:23.329 09:01:45 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:23.329 09:01:45 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:23.329 true 00:23:23.329 09:01:45 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:23.329 09:01:45 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:23:23.329 09:01:45 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:23.329 09:01:45 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:23.329 true 00:23:23.329 09:01:45 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:23.329 09:01:45 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:23:23.329 09:01:45 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1407057 00:24:19.666 00:24:19.666 job0: (groupid=0, jobs=1): err= 0: pid=1407174: Sun Jun 9 09:02:40 2024 00:24:19.666 read: IOPS=1320, BW=5280KiB/s (5407kB/s)(309MiB/60000msec) 00:24:19.666 slat (nsec): min=6475, max=46481, avg=7618.59, stdev=899.75 00:24:19.666 clat (usec): min=73, max=397, avg=104.86, stdev= 6.30 00:24:19.666 lat (usec): min=95, max=409, avg=112.48, stdev= 6.37 00:24:19.666 clat percentiles (usec): 00:24:19.666 | 1.00th=[ 94], 5.00th=[ 97], 10.00th=[ 98], 20.00th=[ 100], 00:24:19.666 | 30.00th=[ 102], 40.00th=[ 103], 50.00th=[ 105], 60.00th=[ 106], 00:24:19.666 | 70.00th=[ 108], 80.00th=[ 110], 90.00th=[ 113], 95.00th=[ 115], 00:24:19.666 | 99.00th=[ 119], 99.50th=[ 120], 99.90th=[ 133], 99.95th=[ 157], 00:24:19.666 | 99.99th=[ 330] 00:24:19.666 write: IOPS=1322, BW=5291KiB/s (5418kB/s)(310MiB/60000msec); 0 zone resets 00:24:19.666 slat (usec): min=8, max=11092, avg=10.15, stdev=44.63 00:24:19.666 clat (usec): min=24, max=41746k, avg=629.38, stdev=148189.15 00:24:19.666 lat (usec): min=95, max=41746k, avg=639.53, stdev=148189.15 00:24:19.666 clat percentiles (usec): 00:24:19.666 | 1.00th=[ 93], 5.00th=[ 95], 10.00th=[ 97], 20.00th=[ 99], 00:24:19.666 | 30.00th=[ 100], 40.00th=[ 102], 50.00th=[ 103], 60.00th=[ 104], 00:24:19.666 | 70.00th=[ 106], 80.00th=[ 109], 90.00th=[ 111], 95.00th=[ 113], 00:24:19.666 | 99.00th=[ 117], 99.50th=[ 119], 99.90th=[ 126], 99.95th=[ 141], 00:24:19.666 | 99.99th=[ 660] 00:24:19.666 bw ( KiB/s): min= 4624, max=18520, per=100.00%, avg=16737.51, stdev=2996.91, samples=37 00:24:19.666 iops : min= 1156, max= 4630, avg=4184.38, stdev=749.23, samples=37 00:24:19.666 lat (usec) : 50=0.01%, 100=23.72%, 250=76.26%, 500=0.02%, 750=0.01% 00:24:19.666 lat (usec) : 1000=0.01% 00:24:19.666 lat (msec) : 2=0.01%, >=2000=0.01% 00:24:19.666 cpu : usr=1.66%, sys=2.90%, ctx=158574, majf=0, minf=108 00:24:19.666 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:19.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:24:19.666 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:19.666 issued rwts: total=79206,79360,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:19.666 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:19.666 00:24:19.666 Run status group 0 (all jobs): 00:24:19.666 READ: bw=5280KiB/s (5407kB/s), 5280KiB/s-5280KiB/s (5407kB/s-5407kB/s), io=309MiB (324MB), run=60000-60000msec 00:24:19.666 WRITE: bw=5291KiB/s (5418kB/s), 5291KiB/s-5291KiB/s (5418kB/s-5418kB/s), io=310MiB (325MB), run=60000-60000msec 00:24:19.666 00:24:19.666 Disk stats (read/write): 00:24:19.666 nvme0n1: ios=79219/78895, merge=0/0, ticks=7814/7551, in_queue=15365, util=99.54% 00:24:19.666 09:02:40 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:19.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1218 -- # local i=0 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1226 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1230 -- # return 0 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:24:19.666 nvmf hotplug test: fio successful as expected 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 
00:24:19.666 rmmod nvme_rdma 00:24:19.666 rmmod nvme_fabrics 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 1406568 ']' 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 1406568 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@949 -- # '[' -z 1406568 ']' 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # kill -0 1406568 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # uname 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1406568 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1406568' 00:24:19.666 killing process with pid 1406568 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # kill 1406568 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # wait 1406568 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:24:19.666 00:24:19.666 real 1m10.114s 00:24:19.666 user 4m25.908s 00:24:19.666 sys 0m6.195s 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # xtrace_disable 00:24:19.666 09:02:41 nvmf_rdma.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:24:19.666 ************************************ 00:24:19.666 END TEST nvmf_initiator_timeout 00:24:19.666 ************************************ 00:24:19.666 09:02:41 nvmf_rdma -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:24:19.666 09:02:41 nvmf_rdma -- nvmf/nvmf.sh@72 -- # '[' rdma = tcp ']' 00:24:19.666 09:02:41 nvmf_rdma -- nvmf/nvmf.sh@78 -- # [[ rdma == \r\d\m\a ]] 00:24:19.666 09:02:41 nvmf_rdma -- nvmf/nvmf.sh@79 -- # run_test nvmf_device_removal test/nvmf/target/device_removal.sh --transport=rdma 00:24:19.666 09:02:41 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:19.666 09:02:41 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:19.666 09:02:41 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:19.666 ************************************ 00:24:19.666 START TEST nvmf_device_removal 00:24:19.666 ************************************ 00:24:19.666 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1124 -- # test/nvmf/target/device_removal.sh --transport=rdma 00:24:19.666 * Looking for test storage... 
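To recap the initiator_timeout test that just ended above, before device_removal gets going: it backs the namespace with a delay bdev (bdev_delay_create Delay0 over Malloc0, 30 us average/p99 latencies), starts the 60-second verifying fio job, then uses bdev_delay_update_latency to raise the latencies to roughly 31 s, past the Linux initiator's stock 30-second I/O timeout, before dropping them back to 30 us so fio can finish cleanly. Condensed as direct rpc.py calls (the trace itself goes through the rpc_cmd wrapper; this invocation form is assumed):

  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
  ./scripts/rpc.py bdev_delay_update_latency Delay0 avg_read 31000000   # 31 s: each I/O now outlives the timeout
  ./scripts/rpc.py bdev_delay_update_latency Delay0 avg_read 30         # restore 30 us before fio wraps up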
00:24:19.666 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:24:19.666 09:02:41 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@10 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh 00:24:19.666 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:24:19.666 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@34 -- # set -e 00:24:19.666 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:24:19.666 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@36 -- # shopt -s extglob 00:24:19.666 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:24:19.666 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output ']' 00:24:19.666 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/build_config.sh ]] 00:24:19.666 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/build_config.sh 00:24:19.666 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@21 -- # 
CONFIG_ISCSI_INITIATOR=y 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@22 -- # CONFIG_CET=n 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 
00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@70 -- # CONFIG_FC=n 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/build_config.sh@83 -- # CONFIG_URING=n 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/applications.sh 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/applications.sh 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common 00:24:19.667 09:02:41 
nvmf_rdma.nvmf_device_removal -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/include/spdk/config.h ]] 00:24:19.667 09:02:41 nvmf_rdma.nvmf_device_removal -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:24:19.667 #define SPDK_CONFIG_H 00:24:19.667 #define SPDK_CONFIG_APPS 1 00:24:19.667 #define SPDK_CONFIG_ARCH native 00:24:19.667 #undef SPDK_CONFIG_ASAN 00:24:19.667 #undef SPDK_CONFIG_AVAHI 00:24:19.667 #undef SPDK_CONFIG_CET 00:24:19.667 #define SPDK_CONFIG_COVERAGE 1 00:24:19.667 #define SPDK_CONFIG_CROSS_PREFIX 00:24:19.667 #undef SPDK_CONFIG_CRYPTO 00:24:19.667 #undef SPDK_CONFIG_CRYPTO_MLX5 00:24:19.667 #undef SPDK_CONFIG_CUSTOMOCF 00:24:19.667 #undef SPDK_CONFIG_DAOS 00:24:19.667 #define SPDK_CONFIG_DAOS_DIR 00:24:19.667 #define SPDK_CONFIG_DEBUG 1 00:24:19.667 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:24:19.667 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build 00:24:19.667 #define SPDK_CONFIG_DPDK_INC_DIR 00:24:19.667 #define SPDK_CONFIG_DPDK_LIB_DIR 00:24:19.667 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:24:19.667 #undef SPDK_CONFIG_DPDK_UADK 00:24:19.667 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/lib/env_dpdk 00:24:19.667 #define SPDK_CONFIG_EXAMPLES 1 00:24:19.667 #undef SPDK_CONFIG_FC 00:24:19.667 #define SPDK_CONFIG_FC_PATH 00:24:19.668 #define SPDK_CONFIG_FIO_PLUGIN 1 00:24:19.668 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:24:19.668 #undef SPDK_CONFIG_FUSE 00:24:19.668 #undef SPDK_CONFIG_FUZZER 00:24:19.668 #define SPDK_CONFIG_FUZZER_LIB 00:24:19.668 #undef SPDK_CONFIG_GOLANG 00:24:19.668 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:24:19.668 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:24:19.668 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:24:19.668 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:24:19.668 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:24:19.668 #undef SPDK_CONFIG_HAVE_LIBBSD 00:24:19.668 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:24:19.668 #define SPDK_CONFIG_IDXD 1 00:24:19.668 
#define SPDK_CONFIG_IDXD_KERNEL 1 00:24:19.668 #undef SPDK_CONFIG_IPSEC_MB 00:24:19.668 #define SPDK_CONFIG_IPSEC_MB_DIR 00:24:19.668 #define SPDK_CONFIG_ISAL 1 00:24:19.668 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:24:19.668 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:24:19.668 #define SPDK_CONFIG_LIBDIR 00:24:19.668 #undef SPDK_CONFIG_LTO 00:24:19.668 #define SPDK_CONFIG_MAX_LCORES 00:24:19.668 #define SPDK_CONFIG_NVME_CUSE 1 00:24:19.668 #undef SPDK_CONFIG_OCF 00:24:19.668 #define SPDK_CONFIG_OCF_PATH 00:24:19.668 #define SPDK_CONFIG_OPENSSL_PATH 00:24:19.668 #undef SPDK_CONFIG_PGO_CAPTURE 00:24:19.668 #define SPDK_CONFIG_PGO_DIR 00:24:19.668 #undef SPDK_CONFIG_PGO_USE 00:24:19.668 #define SPDK_CONFIG_PREFIX /usr/local 00:24:19.668 #undef SPDK_CONFIG_RAID5F 00:24:19.668 #undef SPDK_CONFIG_RBD 00:24:19.668 #define SPDK_CONFIG_RDMA 1 00:24:19.668 #define SPDK_CONFIG_RDMA_PROV verbs 00:24:19.668 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:24:19.668 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:24:19.668 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:24:19.668 #define SPDK_CONFIG_SHARED 1 00:24:19.668 #undef SPDK_CONFIG_SMA 00:24:19.668 #define SPDK_CONFIG_TESTS 1 00:24:19.668 #undef SPDK_CONFIG_TSAN 00:24:19.668 #define SPDK_CONFIG_UBLK 1 00:24:19.668 #define SPDK_CONFIG_UBSAN 1 00:24:19.668 #undef SPDK_CONFIG_UNIT_TESTS 00:24:19.668 #undef SPDK_CONFIG_URING 00:24:19.668 #define SPDK_CONFIG_URING_PATH 00:24:19.668 #undef SPDK_CONFIG_URING_ZNS 00:24:19.668 #undef SPDK_CONFIG_USDT 00:24:19.668 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:24:19.668 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:24:19.668 #undef SPDK_CONFIG_VFIO_USER 00:24:19.668 #define SPDK_CONFIG_VFIO_USER_DIR 00:24:19.668 #define SPDK_CONFIG_VHOST 1 00:24:19.668 #define SPDK_CONFIG_VIRTIO 1 00:24:19.668 #undef SPDK_CONFIG_VTUNE 00:24:19.668 #define SPDK_CONFIG_VTUNE_DIR 00:24:19.668 #define SPDK_CONFIG_WERROR 1 00:24:19.668 #define SPDK_CONFIG_WPDK_DIR 00:24:19.668 #undef SPDK_CONFIG_XNVME 00:24:19.668 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- paths/export.sh@5 -- # export PATH 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/common 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/common 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/perf/pm/../../../ 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- pm/common@64 -- # TEST_TAG=N/A 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/.run_test_name 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- pm/common@68 -- # uname -s 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- pm/common@68 -- # PM_OS=Linux 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:24:19.668 09:02:41 
nvmf_rdma.nvmf_device_removal -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- pm/common@76 -- # SUDO[0]= 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- pm/common@76 -- # SUDO[1]='sudo -E' 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- pm/common@81 -- # [[ Linux == Linux ]] 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/power ]] 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@58 -- # : 1 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@62 -- # : 0 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@64 -- # : 0 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@66 -- # : 1 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@68 -- # : 0 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@70 -- # : 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@72 -- # : 0 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@74 -- # : 0 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@76 -- # : 0 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@78 -- # : 0 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- 
common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@80 -- # : 0 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@82 -- # : 0 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@84 -- # : 0 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@86 -- # : 1 00:24:19.668 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@88 -- # : 0 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@90 -- # : 0 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@92 -- # : 1 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@94 -- # : 0 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@96 -- # : 0 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@98 -- # : 0 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@100 -- # : 0 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@102 -- # : rdma 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@104 -- # : 0 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@106 -- # : 0 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@108 -- # : 0 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@110 -- # : 0 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@112 -- # : 0 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- 
common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@114 -- # : 0 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@116 -- # : 0 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@118 -- # : 0 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@120 -- # : 0 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@122 -- # : 1 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@124 -- # : 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@126 -- # : 0 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@128 -- # : 0 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@130 -- # : 0 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@132 -- # : 0 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@134 -- # : 0 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@136 -- # : 0 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@138 -- # : 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@140 -- # : true 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@142 -- # : 0 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@144 -- # : 0 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@146 -- # : 0 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- 
common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@148 -- # : 0 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@150 -- # : 0 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@152 -- # : 0 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@154 -- # : e810 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@156 -- # : 0 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@158 -- # : 0 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@160 -- # : 0 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@162 -- # : 0 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@164 -- # : 0 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@167 -- # : 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@169 -- # : 0 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@171 -- # : 0 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@177 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/python 00:24:19.669 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:24:19.670 09:02:41 
nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@200 -- # cat 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:24:19.670 09:02:41 
nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@263 -- # export valgrind= 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@263 -- # valgrind= 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@269 -- # uname -s 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@279 -- # MAKE=make 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j96 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@299 -- # TEST_MODE= 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@300 -- # for i in "$@" 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@301 -- # case "$i" in 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=rdma 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@318 -- # [[ -z 1417040 ]] 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@318 -- # kill -0 1417040 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1679 -- # set_test_storage 2147483648 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@331 -- # local mount target_dir 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@338 -- # mktemp -udt 
spdk.XXXXXX 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.Z8lLOw 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target /tmp/spdk.Z8lLOw/tests/target /tmp/spdk.Z8lLOw 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@327 -- # df -T 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:24:19.670 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=900243456 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=4384186368 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=89930436608 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=95562735616 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=5632299008 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:24:19.671 
09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=47771447296
00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=47781367808
00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=9920512
00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount
00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs
00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs
00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=19089510400
00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=19112550400
00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=23040000
00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount
00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs
00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs
00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=47780921344
00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=47781367808
00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=446464
00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount
00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs
00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs
00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # avails["$mount"]=9556267008
00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@362 -- # sizes["$mount"]=9556271104
00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@363 -- # uses["$mount"]=4096
00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount
00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n'
00:24:19.671 * Looking for test storage...
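A note on this stretch of the trace: set_test_storage parses `df -T` into associative arrays keyed by mountpoint, then walks the candidate directories and keeps the first one whose filesystem can hold the requested size (2147483648 bytes plus a 64 MiB margin, hence requested_size=2214592512). A condensed sketch of that logic follows; the variable names mirror the trace, but the script is a reconstruction rather than the literal autotest_common.sh source:

# Reconstruction of the storage scan traced above (not the literal source).
requested_size=$((2147483648 + 64 * 1024 * 1024))   # 2214592512, as logged
declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do  # df -T column order
    mounts["$mount"]=$source
    fss["$mount"]=$fs
    sizes["$mount"]=$size
    avails["$mount"]=$avail
    uses["$mount"]=$use
done < <(df -T | grep -v Filesystem)
for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    target_space=${avails["$mount"]}
    if (( target_space >= requested_size )); then
        export SPDK_TEST_STORAGE=$target_dir
        break
    fi
done

On this box the root overlay wins with roughly 84 GiB available (target_space=89930436608 in the selection that follows), so the first candidate, spdk/test/nvmf/target, is kept.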
00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@368 -- # local target_space new_size 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@372 -- # mount=/ 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@374 -- # target_space=89930436608 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@381 -- # new_size=7846891520 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:24:19.671 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@389 -- # return 0 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1681 -- # set -o errtrace 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1682 -- # shopt -s extdebug 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1683 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1685 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1686 -- # true 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1688 -- # xtrace_fd 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@27 -- # exec 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@29 -- # exec 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@31 -- 
# xtrace_restore 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@18 -- # set -x 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@11 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@7 -- # uname -s 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.671 09:02:41 
nvmf_rdma.nvmf_device_removal -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.671 09:02:41 nvmf_rdma.nvmf_device_removal -- paths/export.sh@5 -- # export PATH 00:24:19.672 09:02:41 nvmf_rdma.nvmf_device_removal -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.672 09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@47 -- # : 0 00:24:19.672 09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:19.672 09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:19.672 09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:19.672 09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:19.672 09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:19.672 09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:19.672 09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:19.672 09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:19.672 09:02:41 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@13 -- # tgt_core_mask=0x3 00:24:19.672 09:02:41 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@14 -- # bdevperf_core_mask=0x4 00:24:19.672 09:02:41 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@15 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:19.672 09:02:41 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@16 -- # bdevperf_rpc_pid=-1 00:24:19.672 09:02:41 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@18 -- # nvmftestinit 00:24:19.672 
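The nvmftestinit call that follows drives gather_supported_nvmf_pci_devs: it builds per-family lists of RDMA-capable NICs by PCI vendor:device ID and, because this job runs with SPDK_TEST_NVMF_NICS=e810, keeps only the Intel E810 list. A rough standalone equivalent using pciutils (a reconstruction, not the harness code, which walks an spdk-internal pci_bus_cache instead of calling lspci):

# Reconstruction of the NIC discovery traced below, using lspci instead of
# the harness's pci_bus_cache. The E810 IDs are the ones in the trace.
intel=0x8086
declare -a e810=("$intel:0x1592" "$intel:0x159b")
for id in "${e810[@]}"; do
    # -D: include PCI domain, -n: numeric IDs, -d: vendor:device filter
    lspci -Dn -d "${id//0x/}" | while read -r addr _; do
        echo "Found $addr ($id)"   # mirrors the log's "Found 0000:af:00.0 ..."
    done
done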
09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:24:19.672 09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:19.672 09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:19.672 09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:19.672 09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:19.672 09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.672 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:19.672 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.672 09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:19.672 09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:19.672 09:02:41 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@285 -- # xtrace_disable 00:24:19.672 09:02:41 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@10 -- # set +x 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@291 -- # pci_devs=() 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@295 -- # net_devs=() 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@296 -- # e810=() 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@296 -- # local -ga e810 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@297 -- # x722=() 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@297 -- # local -ga x722 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@298 -- # mlx=() 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@298 -- # local -ga mlx 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:24.939 09:02:46 
nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:24.939 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:24.939 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- 
nvmf/common.sh@375 -- # (( 1 != 1 )) 00:24:24.939 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@377 -- # modinfo irdma 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:24.940 Found net devices under 0000:af:00.0: cvl_0_0 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:24.940 Found net devices under 0000:af:00.1: cvl_0_1 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@414 -- # is_hw=yes 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@420 -- # rdma_device_init 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@58 -- # uname 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@62 -- # modprobe ib_cm 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@63 -- # modprobe ib_core 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@64 -- # modprobe ib_umad 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@66 -- # modprobe iw_cm 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@502 -- # allocate_nic_ips 00:24:24.940 
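With the irdma driver loaded in RoCE mode (modprobe irdma roce_ena=1) and the IB/RDMA core modules in place, allocate_nic_ips checks each RDMA-capable port for an existing IPv4 address and would assign one from the 192.168.100.0/24 pool if it were missing. The address read is the pipeline visible at nvmf/common.sh@113 in the trace; the wrapper function below is a sketch around that exact pipeline:

# The ip/awk/cut pipeline is taken from the trace; the function wrapper
# is a sketch of nvmf/common.sh's get_ip_address, not a verbatim copy.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
# Addresses start at NVMF_IP_PREFIX.NVMF_IP_LEAST_ADDR (192.168.100.8):
get_ip_address cvl_0_0   # 192.168.100.8 on this node
get_ip_address cvl_0_1   # 192.168.100.9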
09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@73 -- # get_rdma_if_list 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo cvl_0_0 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo cvl_0_1 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:24:24.940 8: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:24:24.940 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:24:24.940 altname enp175s0f0np0 00:24:24.940 altname ens801f0np0 00:24:24.940 inet 192.168.100.8/24 scope global cvl_0_0 00:24:24.940 valid_lft forever preferred_lft forever 00:24:24.940 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:24:24.940 valid_lft forever preferred_lft forever 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- 
nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:24:24.940 9: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:24:24.940 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:24:24.940 altname enp175s0f1np1 00:24:24.940 altname ens801f1np1 00:24:24.940 inet 192.168.100.9/24 scope global cvl_0_1 00:24:24.940 valid_lft forever preferred_lft forever 00:24:24.940 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:24:24.940 valid_lft forever preferred_lft forever 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@422 -- # return 0 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@86 -- # get_rdma_if_list 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo cvl_0_0 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@104 -- # echo cvl_0_1 00:24:24.940 09:02:46 
nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@105 -- # continue 2 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:24:24.940 192.168.100.9' 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:24:24.940 192.168.100.9' 00:24:24.940 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@457 -- # head -n 1 00:24:24.941 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:24.941 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:24:24.941 192.168.100.9' 00:24:24.941 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # tail -n +2 00:24:24.941 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # head -n 1 00:24:24.941 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:24.941 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:24:24.941 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:24.941 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:24:24.941 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:24:24.941 09:02:46 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:24:24.941 09:02:46 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@235 -- # BOND_NAME=bond_nvmf 00:24:24.941 09:02:46 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@236 -- # BOND_IP=10.11.11.26 00:24:24.941 09:02:46 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@237 -- # BOND_MASK=24 00:24:24.941 09:02:46 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@311 -- # run_test nvmf_device_removal_pci_remove_no_srq test_remove_and_rescan --no-srq 00:24:24.941 09:02:46 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:24:24.941 09:02:46 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1106 -- # xtrace_disable 00:24:24.941 09:02:46 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@10 -- # set +x 00:24:24.941 ************************************ 
00:24:24.941 START TEST nvmf_device_removal_pci_remove_no_srq 00:24:24.941 ************************************ 00:24:24.941 09:02:46 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@1124 -- # test_remove_and_rescan --no-srq 00:24:24.941 09:02:46 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@128 -- # nvmfappstart -m 0x3 00:24:24.941 09:02:46 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:24.941 09:02:46 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@723 -- # xtrace_disable 00:24:24.941 09:02:46 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:24:24.941 09:02:46 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@481 -- # nvmfpid=1420042 00:24:24.941 09:02:46 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@482 -- # waitforlisten 1420042 00:24:24.941 09:02:46 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:24.941 09:02:46 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@830 -- # '[' -z 1420042 ']' 00:24:24.941 09:02:46 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.941 09:02:46 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:24.941 09:02:46 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.941 09:02:46 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:24.941 09:02:46 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:24:24.941 [2024-06-09 09:02:46.865818] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:24:24.941 [2024-06-09 09:02:46.865876] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:24.941 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.941 [2024-06-09 09:02:46.921581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:24.941 [2024-06-09 09:02:46.994407] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:24.941 [2024-06-09 09:02:46.994447] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:24.941 [2024-06-09 09:02:46.994453] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:24.941 [2024-06-09 09:02:46.994458] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
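nvmfappstart, traced above, boils down to launching nvmf_tgt in the background and blocking until its RPC socket answers. A sketch under the autotest conventions visible in this log ($rootdir and waitforlisten come from SPDK's common test scripts):

    # -i 0: shm id, -e 0xFFFF: enable all tracepoint groups (confirmed by the
    # "Tracepoint Group Mask 0xFFFF" notice above), -m 0x3: run reactors on
    # cores 0 and 1 (the two reactors started above).
    "$rootdir"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # Poll until the target accepts RPCs on the default /var/tmp/spdk.sock.
    waitforlisten "$nvmfpid"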
00:24:24.941 [2024-06-09 09:02:46.994463] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:24.941 [2024-06-09 09:02:46.994560] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.941 [2024-06-09 09:02:46.994562] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.200 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:25.200 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@863 -- # return 0 00:24:25.200 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:25.200 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@729 -- # xtrace_disable 00:24:25.200 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:24:25.200 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:25.200 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@130 -- # create_subsystem_and_connect --no-srq 00:24:25.200 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@45 -- # local -gA netdev_nvme_dict 00:24:25.200 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@46 -- # netdev_nvme_dict=() 00:24:25.200 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@48 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 --no-srq 00:24:25.200 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.200 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:24:25.200 [2024-06-09 09:02:47.714185] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x24bf2d0/0x24be910) succeed. 00:24:25.200 [2024-06-09 09:02:47.722720] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x24c0580/0x24bee90) succeed. 00:24:25.200 [2024-06-09 09:02:47.722747] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:24:25.200 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.200 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@49 -- # get_rdma_if_list 00:24:25.200 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:25.200 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:25.200 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:25.200 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:25.200 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:25.200 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:25.200 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:25.200 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:24:25.200 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@104 -- # echo cvl_0_0 00:24:25.200 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@105 -- # continue 2 00:24:25.200 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:25.200 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:25.200 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:24:25.200 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:25.201 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:24:25.201 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@104 -- # echo cvl_0_1 00:24:25.201 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@105 -- # continue 2 00:24:25.460 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:24:25.460 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev cvl_0_0 00:24:25.460 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@25 -- # local -a dev_name 00:24:25.460 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@27 -- # dev_name=cvl_0_0 00:24:25.460 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@28 -- # 
malloc_name=cvl_0_0 00:24:25.460 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # get_subsystem_nqn cvl_0_0 00:24:25.460 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_cvl_0_0 00:24:25.460 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_cvl_0_0 00:24:25.460 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # get_ip_address cvl_0_0 00:24:25.460 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:24:25.460 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:24:25.460 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:25.460 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:25.460 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # ip=192.168.100.8 00:24:25.460 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@31 -- # serial=SPDK000cvl_0_0 00:24:25.460 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:24:25.460 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:24:25.460 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b cvl_0_0 00:24:25.460 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.460 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:24:25.460 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.460 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_cvl_0_0 -a -s SPDK000cvl_0_0 00:24:25.460 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.460 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:24:25.460 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.460 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_cvl_0_0 cvl_0_0 00:24:25.460 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.460 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:24:25.460 09:02:47 
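create_subsystem_and_connect_on_netdev, traced here for cvl_0_0 (the listener registration follows just below), provisions one malloc-backed subsystem per netdev on top of the --no-srq transport created at device_removal.sh@48. A sketch of the sequence, assuming rpc_cmd is the usual wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock:

    # Transport first (from @48 above): RDMA, 1024 shared buffers, 8 KiB I/O
    # unit size (-u), and no shared receive queues, so each queue pair owns
    # its receive resources.
    rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 --no-srq

    create_subsystem_and_connect_on_netdev() {
        local dev_name=$1 malloc_name=$1 ip
        local nqn=nqn.2016-06.io.spdk:system_$dev_name
        local serial=SPDK000$dev_name
        ip=$(get_ip_address "$dev_name")
        rpc_cmd bdev_malloc_create 128 512 -b "$malloc_name"  # 128 MiB, 512 B blocks
        rpc_cmd nvmf_create_subsystem "$nqn" -a -s "$serial"  # -a: allow any host
        rpc_cmd nvmf_subsystem_add_ns "$nqn" "$malloc_name"
        rpc_cmd nvmf_subsystem_add_listener "$nqn" -t rdma -a "$ip" -s 4420
    }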
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.460 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_cvl_0_0 -t rdma -a 192.168.100.8 -s 4420 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:24:25.461 [2024-06-09 09:02:47.835337] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@41 -- # return 0 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=cvl_0_0 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev cvl_0_1 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@25 -- # local -a dev_name 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@27 -- # dev_name=cvl_0_1 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@28 -- # malloc_name=cvl_0_1 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # get_subsystem_nqn cvl_0_1 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_cvl_0_1 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_cvl_0_1 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # get_ip_address cvl_0_1 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@30 -- # ip=192.168.100.9 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@31 -- # serial=SPDK000cvl_0_1 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- 
target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b cvl_0_1 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_cvl_0_1 -a -s SPDK000cvl_0_1 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_cvl_0_1 cvl_0_1 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_cvl_0_1 -t rdma -a 192.168.100.9 -s 4420 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:24:25.461 [2024-06-09 09:02:47.909741] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@41 -- # return 0 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=cvl_0_1 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@53 -- # return 0 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@132 -- # generate_io_traffic_with_bdevperf cvl_0_0 cvl_0_1 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- 
target/device_removal.sh@87 -- # dev_names=('cvl_0_0' 'cvl_0_1') 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@87 -- # local dev_names 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@89 -- # mkdir -p /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@91 -- # bdevperf_pid=1420304 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@93 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; kill -9 $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@90 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@94 -- # waitforlisten 1420304 /var/tmp/bdevperf.sock 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@830 -- # '[' -z 1420304 ']' 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@835 -- # local max_retries=100 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:25.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
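generate_io_traffic_with_bdevperf starts bdevperf as an idle RPC server on its own socket rather than as a one-shot benchmark; the workload is armed later via perform_tests. A sketch of the launch traced above:

    # -z: start idle and wait for a perform_tests RPC; -r: private RPC socket,
    # kept separate from the target's spdk.sock. Workload parameters: queue
    # depth 128, 4 KiB I/Os, verify pattern, 90 s runtime, core mask 0x4.
    "$rootdir"/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
    bdevperf_pid=$!
    waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock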
00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@839 -- # xtrace_disable 00:24:25.461 09:02:47 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:24:26.399 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:24:26.399 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@863 -- # return 0 00:24:26.399 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:26.399 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:26.399 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:24:26.399 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:26.399 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:24:26.399 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # get_subsystem_nqn cvl_0_0 00:24:26.399 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_cvl_0_0 00:24:26.399 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_cvl_0_0 00:24:26.399 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # get_ip_address cvl_0_0 00:24:26.399 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:24:26.399 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:24:26.399 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:26.399 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:26.399 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.8 00:24:26.399 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_cvl_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_cvl_0_0 -l -1 -o 1 00:24:26.399 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:26.399 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:24:26.399 Nvme_cvl_0_0n1 00:24:26.399 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:26.399 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 
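Each exported subsystem is then attached back into bdevperf as an initiator-side NVMe bdev; the first attach above prints the resulting namespace bdev, Nvme_cvl_0_0n1. The same RPC in standalone form; reading -l as the controller-loss timeout and -o as the reconnect delay is an interpretation of this SPDK revision's bdev_nvme options:

    # -l -1: controller-loss timeout of -1, i.e. never give the controller up;
    # -o 1: retry the reconnect every second. Together they let the bdev ride
    # out the hot-remove below and reattach when the port comes back.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme_cvl_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:system_cvl_0_0 -l -1 -o 1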
00:24:26.399 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # get_subsystem_nqn cvl_0_1 00:24:26.399 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_cvl_0_1 00:24:26.399 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_cvl_0_1 00:24:26.399 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # get_ip_address cvl_0_1 00:24:26.399 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:24:26.399 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:24:26.399 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:26.399 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:26.399 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.9 00:24:26.399 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_cvl_0_1 -t rdma -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_cvl_0_1 -l -1 -o 1 00:24:26.399 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:26.399 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:24:26.658 Nvme_cvl_0_1n1 00:24:26.658 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:26.658 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@110 -- # bdevperf_rpc_pid=1420537 00:24:26.658 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@112 -- # sleep 5 00:24:26.658 09:02:48 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@109 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:31.932 09:02:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:24:31.932 09:02:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@135 -- # nvme_dev=cvl_0_0 00:24:31.932 09:02:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # get_rdma_device_name cvl_0_0 00:24:31.932 09:02:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@71 -- # dev_name=cvl_0_0 00:24:31.932 09:02:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # get_pci_dir cvl_0_0 00:24:31.932 09:02:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=cvl_0_0 00:24:31.932 09:02:53 
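With both controllers attached, the traffic itself is kicked off through bdevperf's RPC helper in the background, and the script sleeps briefly so I/O is in flight before the first hot-remove:

    # perform_tests starts the verify workload inside the already-running
    # bdevperf; -t 120 is the helper's own RPC timeout, distinct from the
    # workload's 90 s runtime.
    "$rootdir"/examples/bdev/bdevperf/bdevperf.py \
        -t 120 -s /var/tmp/bdevperf.sock perform_tests &
    bdevperf_rpc_pid=$!
    sleep 5   # let traffic ramp before pulling the device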
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:af:00.0/net/cvl_0_0/device 00:24:31.932 09:02:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:ae/0000:ae:02.0/0000:af:00.0/infiniband 00:24:31.933 09:02:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # rdma_dev_name=rocep175s0f0 00:24:31.933 09:02:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # get_ip_address cvl_0_0 00:24:31.933 09:02:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:24:31.933 09:02:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:24:31.933 09:02:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:31.933 09:02:53 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:31.933 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # origin_ip=192.168.100.8 00:24:31.933 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # get_pci_dir cvl_0_0 00:24:31.933 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=cvl_0_0 00:24:31.933 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:af:00.0/net/cvl_0_0/device 00:24:31.933 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:ae/0000:ae:02.0/0000:af:00.0 00:24:31.933 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt rocep175s0f0 00:24:31.933 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=rocep175s0f0 00:24:31.933 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:24:31.933 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:24:31.933 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:31.933 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep rocep175s0f0 00:24:31.933 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:24:31.933 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:31.933 rocep175s0f0 00:24:31.933 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 0 00:24:31.933 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@145 -- # remove_one_nic cvl_0_0 
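The removal machinery is plain sysfs plus one RPC, all visible in the trace above and in the remove_one_nic call that follows. A sketch reconstructed from it; the 0000:af:00.0 in the traced readlink is an already-expanded glob, and xtrace never shows redirect targets, so the $pci_dir/remove path below is an assumption based on the standard Linux PCI hot-remove node:

    get_pci_dir() {
        local dev_name=$1
        # Which PCI function owns this netdev? Resolves to
        # /sys/devices/pci0000:ae/0000:ae:02.0/0000:af:00.0 in this run.
        readlink -f /sys/bus/pci/devices/*/net/"$dev_name"/device
    }

    get_rdma_device_name() {
        local dev_name=$1
        # The lone entry under infiniband/ is the RDMA device (rocep175s0f0).
        ls "$(get_pci_dir "$dev_name")/infiniband"
    }

    check_rdma_dev_exists_in_nvmf_tgt() {
        local rdma_dev_name=$1
        # grep succeeds while the target's poll group still lists the device.
        rpc_cmd nvmf_get_stats \
            | jq -r '.poll_groups[0].transports[].devices[].name' \
            | grep "$rdma_dev_name"
    }

    remove_one_nic() {
        local dev_name=$1
        echo 1 > "$(get_pci_dir "$dev_name")/remove"  # assumed redirect target
    }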
00:24:31.933 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@66 -- # dev_name=cvl_0_0 00:24:31.933 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # echo 1 00:24:31.933 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # get_pci_dir cvl_0_0 00:24:31.933 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=cvl_0_0 00:24:31.933 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:af:00.0/net/cvl_0_0/device 00:24:31.933 [2024-06-09 09:02:54.070005] rdma.c:3574:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.8:4420 on device rocep175s0f0 is being removed. 00:24:31.933 [2024-06-09 09:02:54.070263] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno No such device or address (6) 00:24:31.933 [2024-06-09 09:02:54.070870] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno No such device or address (6) 00:24:31.933 [2024-06-09 09:02:54.070887] rdma.c: 859:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 95 00:24:31.933 [2024-06-09 09:02:54.070893] rdma.c: 646:nvmf_rdma_dump_qpair_contents: *ERROR*: Dumping contents of queue pair (QID 1) 00:24:31.933 [2024-06-09 09:02:54.070900] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:24:31.933 [2024-06-09 09:02:54.070905] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:24:31.933 [2024-06-09 09:02:54.070910] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:24:31.933 [2024-06-09 09:02:54.070915] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:24:31.933 [2024-06-09 09:02:54.070921] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:24:31.933 [2024-06-09 09:02:54.070927] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:24:31.933 [2024-06-09 09:02:54.070932] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:24:31.933 [2024-06-09 09:02:54.070937] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:24:31.933 [2024-06-09 09:02:54.070942] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:24:31.933 [2024-06-09 09:02:54.070947] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:24:31.933 [2024-06-09 09:02:54.070953] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:24:31.933 [2024-06-09 09:02:54.070957] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:24:31.933 [2024-06-09 09:02:54.070962] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 0 00:24:31.933 [2024-06-09 09:02:54.070967] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:24:31.933 [2024-06-09 09:02:54.070972] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:24:31.933 [2024-06-09 09:02:54.070977] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:24:31.933 [2024-06-09 09:02:54.070982] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:24:31.933 [2024-06-09 09:02:54.070994] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:24:31.933 [2024-06-09 09:02:54.070999] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data 
From Pool: 0 00:24:31.933
[... queue-pair dump elided: the log continues for several hundred more nvmf_rdma_dump_request lines in the same alternating pattern, one 'Request Data From Pool: 0|1' (rdma.c: 632) / 'Request opcode: 1|2' (rdma.c: 634) pair per request still outstanding on the destroyed qpair (queue depth 95 per the warning above) ...]
[2024-06-09 09:02:54.071824] rdma.c:
634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 1 00:24:31.935 [2024-06-09 09:02:54.071829] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:24:31.935 [2024-06-09 09:02:54.071834] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:24:31.935 [2024-06-09 09:02:54.071841] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:24:31.935 [2024-06-09 09:02:54.071846] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:24:31.935 [2024-06-09 09:02:54.071851] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:24:31.935 [2024-06-09 09:02:54.071856] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:24:31.935 [2024-06-09 09:02:54.071861] rdma.c: 632:nvmf_rdma_dump_request: *ERROR*: Request Data From Pool: 1 00:24:31.935 [2024-06-09 09:02:54.071865] rdma.c: 634:nvmf_rdma_dump_request: *ERROR*: Request opcode: 2 00:24:32.194 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # seq 1 10 00:24:32.453 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:24:32.453 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt rocep175s0f0 00:24:32.453 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=rocep175s0f0 00:24:32.453 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:24:32.453 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:24:32.453 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:32.453 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:24:32.453 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep rocep175s0f0 00:24:32.453 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.453 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 1 00:24:32.453 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@149 -- # break 00:24:32.453 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:24:32.453 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:24:32.453 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:24:32.453 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:32.453 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:24:32.453 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 
-- # jq -r '.poll_groups[0].transports[].devices | length' 00:24:32.453 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:32.453 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:24:32.453 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@160 -- # rescan_pci 00:24:32.453 09:02:54 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@57 -- # echo 1 00:24:33.021 [2024-06-09 09:02:55.356801] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device irdma0(0x25de880/0x24b7970) succeed. 00:24:33.021 [2024-06-09 09:02:55.356871] rdma.c:3316:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.8:4420 is still failed(-1) to listen. 00:24:33.021 [2024-06-09 09:02:55.356887] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: port active 00:24:33.280 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # seq 1 10 00:24:33.281 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:24:33.281 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:ae/0000:ae:02.0/0000:af:00.0/net 00:24:33.281 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # new_net_dev=cvl_0_0 00:24:33.281 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@164 -- # [[ -z cvl_0_0 ]] 00:24:33.281 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@166 -- # [[ cvl_0_0 != \c\v\l\_\0\_\0 ]] 00:24:33.281 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@171 -- # break 00:24:33.281 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@175 -- # [[ -z cvl_0_0 ]] 00:24:33.281 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@179 -- # ip link set cvl_0_0 up 00:24:33.281 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # get_ip_address cvl_0_0 00:24:33.281 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:24:33.281 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:24:33.281 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:33.281 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:33.281 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:24:33.281 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@181 -- # ip addr add 192.168.100.8/24 dev cvl_0_0 00:24:33.281 [2024-06-09 09:02:55.831703] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:33.281 [2024-06-09 
09:02:55.831743] rdma.c:3322:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.8:4420 come back 00:24:33.281 [2024-06-09 09:02:55.831752] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:24:33.281 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # seq 1 10 00:24:33.281 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:24:33.281 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:24:33.281 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # ib_count=2 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@189 -- # break 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@135 -- # nvme_dev=cvl_0_1 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # get_rdma_device_name cvl_0_1 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@71 -- # dev_name=cvl_0_1 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # get_pci_dir cvl_0_1 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=cvl_0_1 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:af:00.1/net/cvl_0_1/device 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:ae/0000:ae:02.0/0000:af:00.1/infiniband 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@136 -- # rdma_dev_name=rocep175s0f1 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # 
get_ip_address cvl_0_1 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@137 -- # origin_ip=192.168.100.9 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # get_pci_dir cvl_0_1 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=cvl_0_1 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:af:00.1/net/cvl_0_1/device 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:ae/0000:ae:02.0/0000:af:00.1 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt rocep175s0f1 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=rocep175s0f1 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep rocep175s0f1 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:33.541 rocep175s0f1 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 0 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@145 -- # remove_one_nic cvl_0_1 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@66 -- # dev_name=cvl_0_1 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # echo 1 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@67 -- # get_pci_dir cvl_0_1 00:24:33.541 09:02:55 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@61 -- # dev_name=cvl_0_1 00:24:33.541 09:02:55 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:af:00.1/net/cvl_0_1/device 00:24:33.541 [2024-06-09 09:02:55.971989] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:24:33.541 [2024-06-09 09:02:55.973053] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:24:33.541 [2024-06-09 09:02:55.974512] rdma.c:3574:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.9:4420 on device rocep175s0f1 is being removed. 00:24:34.109 09:02:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # seq 1 10 00:24:34.109 09:02:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:24:34.109 09:02:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt rocep175s0f1 00:24:34.109 09:02:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@76 -- # local rdma_dev_name=rocep175s0f1 00:24:34.109 09:02:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:24:34.109 09:02:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:24:34.109 09:02:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.109 09:02:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:24:34.109 09:02:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@77 -- # grep rocep175s0f1 00:24:34.109 09:02:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.109 09:02:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@78 -- # return 1 00:24:34.109 09:02:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@149 -- # break 00:24:34.109 09:02:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:24:34.109 09:02:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:24:34.109 09:02:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:24:34.109 09:02:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:24:34.109 09:02:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:34.110 09:02:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:24:34.368 09:02:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:34.368 09:02:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- 
target/device_removal.sh@158 -- # ib_count_after_remove=1 00:24:34.368 09:02:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@160 -- # rescan_pci 00:24:34.368 09:02:56 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@57 -- # echo 1 00:24:34.629 [2024-06-09 09:02:57.180375] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device irdma0(0x2719dd0/0x24abb60) succeed. 00:24:34.629 [2024-06-09 09:02:57.180432] rdma.c:3316:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.9:4420 is still failed(-1) to listen. 00:24:34.629 [2024-06-09 09:02:57.180449] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: port active 00:24:35.198 09:02:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # seq 1 10 00:24:35.198 09:02:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:24:35.198 09:02:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:ae/0000:ae:02.0/0000:af:00.1/net 00:24:35.198 09:02:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@163 -- # new_net_dev=cvl_0_1 00:24:35.198 09:02:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@164 -- # [[ -z cvl_0_1 ]] 00:24:35.198 09:02:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@166 -- # [[ cvl_0_1 != \c\v\l\_\0\_\1 ]] 00:24:35.198 09:02:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@171 -- # break 00:24:35.198 09:02:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@175 -- # [[ -z cvl_0_1 ]] 00:24:35.198 09:02:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@179 -- # ip link set cvl_0_1 up 00:24:35.198 09:02:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # get_ip_address cvl_0_1 00:24:35.198 09:02:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:24:35.198 09:02:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:24:35.198 09:02:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:35.198 09:02:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:35.198 09:02:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:24:35.198 09:02:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@181 -- # ip addr add 192.168.100.9/24 dev cvl_0_1 00:24:35.199 09:02:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # seq 1 10 00:24:35.199 09:02:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:24:35.199 [2024-06-09 09:02:57.642147] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:24:35.199 [2024-06-09 
09:02:57.642173] rdma.c:3322:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.9:4420 come back 00:24:35.199 [2024-06-09 09:02:57.642182] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:24:35.199 09:02:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:24:35.199 09:02:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:24:35.199 09:02:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:24:35.199 09:02:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:24:35.199 09:02:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:35.199 09:02:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:24:35.199 09:02:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:35.199 09:02:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@187 -- # ib_count=2 00:24:35.199 09:02:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:24:35.199 09:02:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@189 -- # break 00:24:35.199 09:02:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@200 -- # stop_bdevperf 00:24:35.199 09:02:57 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@116 -- # wait 1420537 00:26:11.726 0 00:26:11.726 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@118 -- # killprocess 1420304 00:26:11.727 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@949 -- # '[' -z 1420304 ']' 00:26:11.727 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@953 -- # kill -0 1420304 00:26:11.727 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@954 -- # uname 00:26:11.727 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:26:11.727 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1420304 00:26:11.727 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:26:11.727 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:26:11.727 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1420304' 00:26:11.727 killing process with pid 1420304 00:26:11.727 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@968 -- # kill 1420304 
00:26:11.727 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@973 -- # wait 1420304 00:26:11.727 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@119 -- # bdevperf_pid= 00:26:11.727 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@121 -- # cat /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/try.txt 00:26:11.727 [2024-06-09 09:02:47.963738] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:26:11.727 [2024-06-09 09:02:47.963781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1420304 ] 00:26:11.727 EAL: No free 2048 kB hugepages reported on node 1 00:26:11.727 [2024-06-09 09:02:48.012032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.727 [2024-06-09 09:02:48.083671] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:26:11.727 Running I/O for 90 seconds... 00:26:11.727 [2024-06-09 09:02:54.072278] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:26:11.727 [2024-06-09 09:02:54.072306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:11.727 [2024-06-09 09:02:54.072317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32562 cdw0:16 sqhd:a3b9 p:0 m:0 dnr:0 00:26:11.727 [2024-06-09 09:02:54.072325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:11.727 [2024-06-09 09:02:54.072333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32562 cdw0:16 sqhd:a3b9 p:0 m:0 dnr:0 00:26:11.727 [2024-06-09 09:02:54.072339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:11.727 [2024-06-09 09:02:54.072346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32562 cdw0:16 sqhd:a3b9 p:0 m:0 dnr:0 00:26:11.727 [2024-06-09 09:02:54.072353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:11.727 [2024-06-09 09:02:54.072359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32562 cdw0:16 sqhd:a3b9 p:0 m:0 dnr:0 00:26:11.727 [2024-06-09 09:02:54.072662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:11.727 [2024-06-09 09:02:54.072672] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_0] in failed state. 
00:26:11.727 [2024-06-09 09:02:54.072692] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
[... repeated nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs (09:02:54.073813-09:02:54.075466) for the outstanding I/O: READ commands sqid:1 lba:205400-205816 len:8 SGL KEYED DATA BLOCK (len:0x1000 key:0x46eeec97) and WRITE commands sqid:1 lba:205824-206288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:32562 cdw0:94caba10 sqhd:d540 p:0 m:0 dnr:0 ...]
00:26:11.730 [2024-06-09 09:02:54.075474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:206296 len:8 SGL DATA BLOCK OFFSET 0x0
len:0x1000 00:26:11.730 [2024-06-09 09:02:54.075480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32562 cdw0:94caba10 sqhd:d540 p:0 m:0 dnr:0 00:26:11.730 [2024-06-09 09:02:54.075488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:206304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.730 [2024-06-09 09:02:54.075494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32562 cdw0:94caba10 sqhd:d540 p:0 m:0 dnr:0 00:26:11.730 [2024-06-09 09:02:54.075502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:206312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.730 [2024-06-09 09:02:54.075508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32562 cdw0:94caba10 sqhd:d540 p:0 m:0 dnr:0 00:26:11.730 [2024-06-09 09:02:54.075517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:206320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.730 [2024-06-09 09:02:54.079907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32562 cdw0:94caba10 sqhd:d540 p:0 m:0 dnr:0 00:26:11.730 [2024-06-09 09:02:54.079919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:206328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.730 [2024-06-09 09:02:54.079927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32562 cdw0:94caba10 sqhd:d540 p:0 m:0 dnr:0 00:26:11.730 [2024-06-09 09:02:54.079938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:206336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.730 [2024-06-09 09:02:54.079944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32562 cdw0:94caba10 sqhd:d540 p:0 m:0 dnr:0 00:26:11.730 [2024-06-09 09:02:54.079953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:206344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.730 [2024-06-09 09:02:54.079959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32562 cdw0:94caba10 sqhd:d540 p:0 m:0 dnr:0 00:26:11.730 [2024-06-09 09:02:54.079966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:206352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.730 [2024-06-09 09:02:54.079973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32562 cdw0:94caba10 sqhd:d540 p:0 m:0 dnr:0 00:26:11.730 [2024-06-09 09:02:54.079981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:206360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.730 [2024-06-09 09:02:54.079987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32562 cdw0:94caba10 sqhd:d540 p:0 m:0 dnr:0 00:26:11.730 [2024-06-09 09:02:54.079995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:206368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.730 [2024-06-09 09:02:54.080002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32562 cdw0:94caba10 sqhd:d540 p:0 m:0 dnr:0 00:26:11.730 [2024-06-09 09:02:54.080012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:54 nsid:1 lba:206376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.730 [2024-06-09 09:02:54.080018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32562 cdw0:94caba10 sqhd:d540 p:0 m:0 dnr:0 00:26:11.730 [2024-06-09 09:02:54.080025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:206384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.730 [2024-06-09 09:02:54.080032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32562 cdw0:94caba10 sqhd:d540 p:0 m:0 dnr:0 00:26:11.730 [2024-06-09 09:02:54.080040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:206392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.730 [2024-06-09 09:02:54.080046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32562 cdw0:94caba10 sqhd:d540 p:0 m:0 dnr:0 00:26:11.730 [2024-06-09 09:02:54.080053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:206400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.730 [2024-06-09 09:02:54.080059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32562 cdw0:94caba10 sqhd:d540 p:0 m:0 dnr:0 00:26:11.730 [2024-06-09 09:02:54.080067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:206408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.730 [2024-06-09 09:02:54.080074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32562 cdw0:94caba10 sqhd:d540 p:0 m:0 dnr:0 00:26:11.730 [2024-06-09 09:02:54.092785] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:11.730 [2024-06-09 09:02:54.092798] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:11.730 [2024-06-09 09:02:54.092805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:206416 len:8 PRP1 0x0 PRP2 0x0 00:26:11.730 [2024-06-09 09:02:54.092812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.730 [2024-06-09 09:02:54.094897] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_0] resetting controller 00:26:11.730 [2024-06-09 09:02:54.095205] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19) 00:26:11.731 [2024-06-09 09:02:54.095220] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:26:11.731 [2024-06-09 09:02:54.095226] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:26:11.731 [2024-06-09 09:02:54.095240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:11.731 [2024-06-09 09:02:54.095247] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_0] in failed state. 
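The "CQ transport error -6" above is the negative return (-ENXIO, "No such device or address") of spdk_nvme_qpair_process_completions() bubbling up after the RDMA qpair lost its connection; every request still queued is then completed manually with ABORTED - SQ DELETION status. A minimal sketch of how a caller observes this, assuming a ctrlr/qpair pair obtained from an earlier connect sequence not shown here:

/* Hedged sketch: polling an I/O qpair and reacting to a CQ transport error,
 * as the -6 return in the log above indicates. The ctrlr/qpair pointers are
 * assumed to come from spdk_nvme_connect() + spdk_nvme_ctrlr_alloc_io_qpair(). */
#include <stdio.h>
#include "spdk/nvme.h"

static void
poll_qpair_once(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
{
	/* max_completions == 0 means: reap everything currently on the CQ. */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc < 0) {
		/* A negative return (e.g. -ENXIO, printed as "transport error -6"
		 * above) means the transport connection is gone; requests still
		 * queued are aborted with SQ DELETION status, as in this log. */
		fprintf(stderr, "qpair poll failed: %d\n", rc);
		if (spdk_nvme_ctrlr_is_failed(ctrlr)) {
			/* The caller would now start the reset/reconnect path. */
			fprintf(stderr, "controller is in failed state\n");
		}
	}
}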
00:26:11.731 [2024-06-09 09:02:54.095256] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_0] Ctrlr is in error state
00:26:11.731 [2024-06-09 09:02:54.095263] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_0] controller reinitialization failed
00:26:11.731 [2024-06-09 09:02:54.095272] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_0] already in failed state
00:26:11.731 [2024-06-09 09:02:54.095288] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:11.731 [2024-06-09 09:02:54.095296] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_0] resetting controller
00:26:11.731 [2024-06-09 09:02:55.967655] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno No such device or address (6)
00:26:11.731 [2024-06-09 09:02:55.967687 .. 09:02:55.967758] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: (4 repeated entries condensed) ASYNC EVENT REQUEST (0c) qid:0 cid:1..4 nsid:0 cdw10:00000000 cdw11:00000000, each completed as ABORTED - SQ DELETION (00/08) qid:0 cid:32562 cdw0:6 sqhd:a3b9 p:0 m:0 dnr:0
00:26:11.731 [2024-06-09 09:02:55.971562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:26:11.731 [2024-06-09 09:02:55.971586] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_1] in failed state.
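The cycle above (resetting controller, Ctrlr is in error state, controller reinitialization failed, Resetting controller failed, resetting controller again) is the disconnect/reconnect retry loop that bdev_nvme drives while the target address cannot be resolved. A hedged outline of that loop using the public disconnect/reconnect APIs named in the log; exact return-code conventions may differ across SPDK releases, so treat this as a sketch rather than the bdev_nvme implementation:

#include <errno.h>
#include <stdio.h>
#include "spdk/nvme.h"

static int
reset_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
{
	/* Tears down qpairs; this is where "resetting controller" is logged. */
	int rc = spdk_nvme_ctrlr_disconnect(ctrlr);
	if (rc != 0) {
		return rc;
	}

	spdk_nvme_ctrlr_reconnect_async(ctrlr);

	/* Poll until the controller either comes back or fails for good.
	 * (A real caller would poll from a poller rather than spin.) */
	do {
		rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
	} while (rc == -EAGAIN);

	if (rc != 0) {
		/* Matches "controller reinitialization failed" above; the next
		 * reset attempt is scheduled by the caller. */
		fprintf(stderr, "reconnect failed: %d\n", rc);
	}
	return rc;
}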
00:26:11.731 [2024-06-09 09:02:55.971718] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno No such device or address (6)
00:26:11.731-732 [2024-06-09 09:02:55.972460 .. 09:02:55.973162] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: (48 repeated entries condensed) WRITE sqid:1 nsid:1 lba:214656..215032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:32562 cdw0:94caba10 sqhd:d540 p:0 m:0 dnr:0
00:26:11.732-734 [2024-06-09 09:02:55.973170 .. 09:02:55.974318] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: (79 repeated entries condensed) READ sqid:1 nsid:1 lba:214016..214640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079fe000..0x200007962000 len:0x1000 key:0xf3b695b7, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:32562 cdw0:94caba10 sqhd:d540 p:0 m:0 dnr:0
00:26:11.734 [2024-06-09 09:02:55.987287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:11.735 [2024-06-09 09:02:55.987301] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:11.735 [2024-06-09 09:02:55.987308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:214648 len:8 PRP1 0x0 PRP2 0x0
00:26:11.735 [2024-06-09 09:02:55.987315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.735 [2024-06-09 09:02:55.987358] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_1] resetting controller
00:26:11.735 [2024-06-09 09:02:55.987647] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19)
00:26:11.735 [2024-06-09 09:02:55.987659]
nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:26:11.735 [2024-06-09 09:02:55.987665] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:26:11.735 [2024-06-09 09:02:55.987679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:11.735 [2024-06-09 09:02:55.987687] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_1] in failed state. 00:26:11.735 [2024-06-09 09:02:55.987698] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_1] Ctrlr is in error state 00:26:11.735 [2024-06-09 09:02:55.987704] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_1] controller reinitialization failed 00:26:11.735 [2024-06-09 09:02:55.987712] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_1] already in failed state 00:26:11.735 [2024-06-09 09:02:55.987739] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:11.735 [2024-06-09 09:02:55.987746] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_1] resetting controller 00:26:11.735 [2024-06-09 09:02:56.100352] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:26:11.735 [2024-06-09 09:02:56.100365] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:26:11.735 [2024-06-09 09:02:56.100382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:11.735 [2024-06-09 09:02:56.100389] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_0] in failed state. 00:26:11.735 [2024-06-09 09:02:56.100397] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_0] Ctrlr is in error state 00:26:11.735 [2024-06-09 09:02:56.100403] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_0] controller reinitialization failed 00:26:11.735 [2024-06-09 09:02:56.100409] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_0] already in failed state 00:26:11.735 [2024-06-09 09:02:56.100424] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:11.735 [2024-06-09 09:02:56.100429] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_0] resetting controller 00:26:11.735 [2024-06-09 09:02:57.196514] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:11.735 [2024-06-09 09:02:57.992762] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error 00:26:11.735 [2024-06-09 09:02:57.992781] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280 00:26:11.735 [2024-06-09 09:02:57.992804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:26:11.735 [2024-06-09 09:02:57.992811] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_1] in failed state. 
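The reset cycle above is bdev_nvme repeatedly disconnecting and re-resolving the RDMA address while the port is gone; each failed pass logs the "Ctrlr is in error state" / "controller reinitialization failed" pair before the next retry, until "Resetting controller successful." appears. A minimal shell sketch of waiting for that loop to converge, assuming SPDK's scripts/rpc.py, jq, and the standard bdev_nvme_get_controllers RPC against the bdevperf socket used in this run (the suite's own wait logic differs):

    # Sketch only: poll until the named bdev_nvme controller is attached again.
    wait_for_controller() {
        local name=$1
        for _ in $(seq 1 30); do
            # bdev_nvme_get_controllers lists attached controllers by name
            if scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers \
                    | jq -e --arg n "$name" '.[] | select(.name == $n)' >/dev/null; then
                return 0    # controller re-attached after the reset succeeded
            fi
            sleep 1
        done
        return 1            # still gone after ~30s
    }
    # usage: wait_for_controller Nvme_cvl_0_1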
00:26:11.735 [2024-06-09 09:02:57.992822] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_1] Ctrlr is in error state
00:26:11.735 [2024-06-09 09:02:57.992828] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_1] controller reinitialization failed
00:26:11.735 [2024-06-09 09:02:57.992835] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_1] already in failed state
00:26:11.735 [2024-06-09 09:02:57.992852] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:11.735 [2024-06-09 09:02:57.992859] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_1] resetting controller
00:26:11.735 [2024-06-09 09:02:59.048076] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:26:11.735
00:26:11.735 Latency(us)
00:26:11.735 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:11.735 Job: Nvme_cvl_0_0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:11.735 Verification LBA range: start 0x0 length 0x8000
00:26:11.735 Nvme_cvl_0_0n1 : 90.00 11352.99 44.35 0.00 0.00 11253.94 2262.55 4042510.14
00:26:11.735 Job: Nvme_cvl_0_1n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:11.735 Verification LBA range: start 0x0 length 0x8000
00:26:11.735 Nvme_cvl_0_1n1 : 90.01 11325.39 44.24 0.00 0.00 11282.05 1919.27 4042510.14
00:26:11.735 ===================================================================================================================
00:26:11.735 Total : 22678.38 88.59 0.00 0.00 11267.98 1919.27 4042510.14
00:26:11.735 Received shutdown signal, test time was about 90.000000 seconds
00:26:11.735
00:26:11.735 Latency(us)
00:26:11.735 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:11.735 ===================================================================================================================
00:26:11.735 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:11.735 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@123 -- # trap - SIGINT SIGTERM EXIT
00:26:11.735 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@124 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/try.txt
00:26:11.735 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@202 -- # killprocess 1420042
00:26:11.735 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@949 -- # '[' -z 1420042 ']'
00:26:11.735 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@953 -- # kill -0 1420042
00:26:11.735 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@954 -- # uname
00:26:11.735 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:26:11.735 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1420042
00:26:11.735 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:26:11.735 09:04:19
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:26:11.735 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1420042' 00:26:11.735 killing process with pid 1420042 00:26:11.735 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@968 -- # kill 1420042 00:26:11.735 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@973 -- # wait 1420042 00:26:11.735 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@203 -- # nvmfpid= 00:26:11.735 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- target/device_removal.sh@205 -- # return 0 00:26:11.735 00:26:11.735 real 1m33.076s 00:26:11.735 user 4m37.073s 00:26:11.735 sys 0m1.682s 00:26:11.735 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@1125 -- # xtrace_disable 00:26:11.735 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove_no_srq -- common/autotest_common.sh@10 -- # set +x 00:26:11.735 ************************************ 00:26:11.735 END TEST nvmf_device_removal_pci_remove_no_srq 00:26:11.735 ************************************ 00:26:11.735 09:04:19 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@312 -- # run_test nvmf_device_removal_pci_remove test_remove_and_rescan 00:26:11.735 09:04:19 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:26:11.735 09:04:19 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1106 -- # xtrace_disable 00:26:11.735 09:04:19 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@10 -- # set +x 00:26:11.735 ************************************ 00:26:11.735 START TEST nvmf_device_removal_pci_remove 00:26:11.735 ************************************ 00:26:11.735 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@1124 -- # test_remove_and_rescan 00:26:11.735 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@128 -- # nvmfappstart -m 0x3 00:26:11.735 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:11.735 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@723 -- # xtrace_disable 00:26:11.735 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:11.735 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@481 -- # nvmfpid=1435819 00:26:11.735 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@482 -- # waitforlisten 1435819 00:26:11.735 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@830 -- # '[' -z 1435819 ']' 00:26:11.735 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:11.735 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:11.735 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@837 -- # 
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:11.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:11.735 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:11.735 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:11.735 09:04:19 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:11.736 [2024-06-09 09:04:20.000046] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:26:11.736 [2024-06-09 09:04:20.000086] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:11.736 EAL: No free 2048 kB hugepages reported on node 1 00:26:11.736 [2024-06-09 09:04:20.059492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:11.736 [2024-06-09 09:04:20.143957] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:11.736 [2024-06-09 09:04:20.144001] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:11.736 [2024-06-09 09:04:20.144012] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:11.736 [2024-06-09 09:04:20.144018] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:11.736 [2024-06-09 09:04:20.144023] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
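The launch just traced, with waitforlisten blocking on the RPC socket, reduces to roughly the following sketch (not the suite's exact helper; rpc_get_methods is used here only as a cheap liveness probe against the default /var/tmp/spdk.sock shown in the trace):

    # Start the target with the flags from the trace: instance 0, tracepoint mask 0xFFFF, cores 0-1
    /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # Keep polling until the app is up and answering on its UNIX domain socket
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done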
00:26:11.736 [2024-06-09 09:04:20.144060] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:26:11.736 [2024-06-09 09:04:20.144063] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@863 -- # return 0 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@729 -- # xtrace_disable 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@130 -- # create_subsystem_and_connect 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@45 -- # local -gA netdev_nvme_dict 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@46 -- # netdev_nvme_dict=() 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@48 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:11.736 [2024-06-09 09:04:20.854830] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x1e9c2d0/0x1e9b910) succeed. 00:26:11.736 [2024-06-09 09:04:20.863322] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1e9d580/0x1e9be90) succeed. 00:26:11.736 [2024-06-09 09:04:20.863345] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@49 -- # get_rdma_if_list 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@104 -- # echo cvl_0_0 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@105 -- # continue 2 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@104 -- # echo cvl_0_1 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@105 -- # continue 2 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev cvl_0_0 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@25 -- # local -a dev_name 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@27 -- # dev_name=cvl_0_0 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@28 -- # malloc_name=cvl_0_0 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # get_subsystem_nqn cvl_0_0 00:26:11.736 
09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_cvl_0_0 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_cvl_0_0 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # get_ip_address cvl_0_0 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # ip=192.168.100.8 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@31 -- # serial=SPDK000cvl_0_0 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b cvl_0_0 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_cvl_0_0 -a -s SPDK000cvl_0_0 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_cvl_0_0 cvl_0_0 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:11.736 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.737 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_cvl_0_0 -t rdma -a 192.168.100.8 -s 4420 00:26:11.737 09:04:20 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.737 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:11.737 [2024-06-09 09:04:20.985593] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:11.737 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.737 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@41 -- # return 0 00:26:11.737 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=cvl_0_0 00:26:11.737 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@49 -- # for net_dev in $(get_rdma_if_list) 00:26:11.737 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # create_subsystem_and_connect_on_netdev cvl_0_1 00:26:11.737 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@25 -- # local -a dev_name 00:26:11.737 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@27 -- # dev_name=cvl_0_1 00:26:11.737 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@28 -- # malloc_name=cvl_0_1 00:26:11.737 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # get_subsystem_nqn cvl_0_1 00:26:11.737 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_cvl_0_1 00:26:11.737 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@29 -- # nqn=nqn.2016-06.io.spdk:system_cvl_0_1 00:26:11.737 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # get_ip_address cvl_0_1 00:26:11.737 09:04:20 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@30 -- # ip=192.168.100.9 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@31 -- # serial=SPDK000cvl_0_1 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@33 -- # MALLOC_BDEV_SIZE=128 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@34 -- # MALLOC_BLOCK_SIZE=512 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@36 -- # rpc_cmd bdev_malloc_create 128 512 -b cvl_0_1 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.737 09:04:21 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:system_cvl_0_1 -a -s SPDK000cvl_0_1 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:system_cvl_0_1 cvl_0_1 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:system_cvl_0_1 -t rdma -a 192.168.100.9 -s 4420 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:11.737 [2024-06-09 09:04:21.064527] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@41 -- # return 0 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@50 -- # netdev_nvme_dict[$net_dev]=cvl_0_1 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@53 -- # return 0 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@132 -- # generate_io_traffic_with_bdevperf cvl_0_0 cvl_0_1 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@87 -- # dev_names=('cvl_0_0' 'cvl_0_1') 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@87 -- # local dev_names 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@89 -- # mkdir -p /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@91 -- # bdevperf_pid=1435984 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@93 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; 
cat $testdir/try.txt; rm -f $testdir/try.txt; kill -9 $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@90 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@94 -- # waitforlisten 1435984 /var/tmp/bdevperf.sock 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@830 -- # '[' -z 1435984 ']' 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@835 -- # local max_retries=100 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:11.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@839 -- # xtrace_disable 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@863 -- # return 0 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # get_subsystem_nqn cvl_0_0 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_cvl_0_0 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_cvl_0_0 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # get_ip_address cvl_0_0 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print 
$4}' 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.8 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_cvl_0_0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_cvl_0_0 -l -1 -o 1 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:11.737 Nvme_cvl_0_0n1 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@100 -- # for dev_name in "${dev_names[@]}" 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # get_subsystem_nqn cvl_0_1 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@21 -- # echo nqn.2016-06.io.spdk:system_cvl_0_1 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@101 -- # nqn=nqn.2016-06.io.spdk:system_cvl_0_1 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # get_ip_address cvl_0_1 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@102 -- # tgt_ip=192.168.100.9 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme_cvl_0_1 -t rdma -a 192.168.100.9 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:system_cvl_0_1 -l -1 -o 1 00:26:11.737 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.738 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:11.738 Nvme_cvl_0_1n1 00:26:11.738 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.738 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@110 -- # bdevperf_rpc_pid=1436091 00:26:11.738 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@112 -- # sleep 5 00:26:11.738 09:04:21 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@109 -- # 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:11.738 09:04:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:26:11.738 09:04:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@135 -- # nvme_dev=cvl_0_0 00:26:11.738 09:04:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@136 -- # get_rdma_device_name cvl_0_0 00:26:11.738 09:04:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@71 -- # dev_name=cvl_0_0 00:26:11.738 09:04:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # get_pci_dir cvl_0_0 00:26:11.738 09:04:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=cvl_0_0 00:26:11.738 09:04:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:af:00.0/net/cvl_0_0/device 00:26:11.738 09:04:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:ae/0000:ae:02.0/0000:af:00.0/infiniband 00:26:11.738 09:04:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@136 -- # rdma_dev_name=rocep175s0f0 00:26:11.738 09:04:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # get_ip_address cvl_0_0 00:26:11.738 09:04:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:26:11.738 09:04:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:26:11.738 09:04:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:11.738 09:04:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:11.738 09:04:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # origin_ip=192.168.100.8 00:26:11.738 09:04:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@138 -- # get_pci_dir cvl_0_0 00:26:11.738 09:04:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=cvl_0_0 00:26:11.738 09:04:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:af:00.0/net/cvl_0_0/device 00:26:11.738 09:04:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@138 -- # pci_dir=/sys/devices/pci0000:ae/0000:ae:02.0/0000:af:00.0 00:26:11.738 09:04:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt rocep175s0f0 00:26:11.738 09:04:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@76 -- # local rdma_dev_name=rocep175s0f0 00:26:11.738 09:04:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:26:11.738 09:04:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r 
'.poll_groups[0].transports[].devices[].name' 00:26:11.738 09:04:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.738 09:04:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep rocep175s0f0 00:26:11.738 09:04:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:11.738 09:04:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.738 rocep175s0f0 00:26:11.738 09:04:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 0 00:26:11.738 09:04:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@145 -- # remove_one_nic cvl_0_0 00:26:11.738 09:04:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@66 -- # dev_name=cvl_0_0 00:26:11.738 09:04:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # echo 1 00:26:11.738 09:04:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # get_pci_dir cvl_0_0 00:26:11.738 09:04:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=cvl_0_0 00:26:11.738 09:04:26 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:af:00.0/net/cvl_0_0/device 00:26:11.738 [2024-06-09 09:04:26.636013] rdma.c:3574:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.8:4420 on device rocep175s0f0 is being removed. 00:26:11.738 [2024-06-09 09:04:26.636278] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno No such device or address (6) 00:26:11.738 [2024-06-09 09:04:26.636437] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:26:11.738 09:04:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # seq 1 10 00:26:11.738 09:04:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:26:11.738 09:04:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt rocep175s0f0 00:26:11.738 09:04:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@76 -- # local rdma_dev_name=rocep175s0f0 00:26:11.738 09:04:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:26:11.738 09:04:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.738 09:04:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:26:11.738 09:04:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:11.738 09:04:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep rocep175s0f0 00:26:11.738 09:04:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.738 09:04:27 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 1 00:26:11.738 09:04:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@149 -- # break 00:26:11.738 09:04:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:26:11.738 09:04:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:26:11.738 09:04:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:26:11.738 09:04:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:26:11.738 09:04:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.738 09:04:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:11.738 09:04:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.738 09:04:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:26:11.738 09:04:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@160 -- # rescan_pci 00:26:11.738 09:04:27 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@57 -- # echo 1 00:26:11.738 [2024-06-09 09:04:27.918311] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device irdma0(0x1fbb880/0x1e94970) succeed. 00:26:11.738 [2024-06-09 09:04:27.918384] rdma.c:3316:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.8:4420 is still failed(-1) to listen. 
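In short, the remove/rescan cycle just exercised for the first port comes down to two sysfs writes. Bash xtrace does not print redirections, so the exact targets of the two "echo 1" calls are assumed here to be the kernel's standard remove/rescan attributes:

    # Resolve the PCI function backing the netdev (as get_pci_dir does via readlink)
    pci_dir=$(readlink -f /sys/bus/pci/devices/0000:af:00.0/net/cvl_0_0/device)
    echo 1 > "$pci_dir/remove"      # hot-remove: port 192.168.100.8:4420 goes away
    echo 1 > /sys/bus/pci/rescan    # rediscover: a fresh IB device (irdma0) appears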
00:26:11.738 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # seq 1 10 00:26:11.738 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:26:11.738 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:ae/0000:ae:02.0/0000:af:00.0/net 00:26:11.738 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # new_net_dev=cvl_0_0 00:26:11.738 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@164 -- # [[ -z cvl_0_0 ]] 00:26:11.738 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@166 -- # [[ cvl_0_0 != \c\v\l\_\0\_\0 ]] 00:26:11.738 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@171 -- # break 00:26:11.738 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@175 -- # [[ -z cvl_0_0 ]] 00:26:11.738 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@179 -- # ip link set cvl_0_0 up 00:26:11.738 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # get_ip_address cvl_0_0 00:26:11.738 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:26:11.738 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:26:11.738 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:11.738 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:11.738 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:26:11.738 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@181 -- # ip addr add 192.168.100.8/24 dev cvl_0_0 00:26:11.738 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # seq 1 10 00:26:11.738 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:26:11.738 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:26:11.738 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:26:11.738 [2024-06-09 09:04:28.413375] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:11.738 [2024-06-09 09:04:28.413406] rdma.c:3322:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.8:4420 come back 00:26:11.738 [2024-06-09 09:04:28.413416] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:26:11.738 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:26:11.738 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:26:11.738 09:04:28 
nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.738 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:11.738 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # ib_count=2 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@189 -- # break 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@134 -- # for net_dev in "${!netdev_nvme_dict[@]}" 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@135 -- # nvme_dev=cvl_0_1 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@136 -- # get_rdma_device_name cvl_0_1 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@71 -- # dev_name=cvl_0_1 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # get_pci_dir cvl_0_1 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=cvl_0_1 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:af:00.1/net/cvl_0_1/device 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@72 -- # ls /sys/devices/pci0000:ae/0000:ae:02.0/0000:af:00.1/infiniband 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@136 -- # rdma_dev_name=rocep175s0f1 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # get_ip_address cvl_0_1 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@137 -- # origin_ip=192.168.100.9 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@138 -- # get_pci_dir cvl_0_1 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=cvl_0_1 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:af:00.1/net/cvl_0_1/device 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@138 -- # 
pci_dir=/sys/devices/pci0000:ae/0000:ae:02.0/0000:af:00.1 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@140 -- # check_rdma_dev_exists_in_nvmf_tgt rocep175s0f1 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@76 -- # local rdma_dev_name=rocep175s0f1 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep rocep175s0f1 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.739 rocep175s0f1 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 0 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@145 -- # remove_one_nic cvl_0_1 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@66 -- # dev_name=cvl_0_1 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # echo 1 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@67 -- # get_pci_dir cvl_0_1 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@61 -- # dev_name=cvl_0_1 00:26:11.739 09:04:28 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@62 -- # readlink -f /sys/bus/pci/devices/0000:af:00.1/net/cvl_0_1/device 00:26:11.739 [2024-06-09 09:04:28.551540] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:26:11.739 [2024-06-09 09:04:28.552533] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:26:11.739 [2024-06-09 09:04:28.556214] rdma.c:3574:nvmf_rdma_handle_device_removal: *NOTICE*: Port 192.168.100.9:4420 on device rocep175s0f1 is being removed. 
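For reference, the remove_one_nic/get_pci_dir helpers being xtraced above boil down to roughly the following bash. This is a sketch reconstructed from the trace, not the script itself: the sysfs glob and the redirect target are assumptions, since xtrace prints already-expanded paths and does not echo redirections.

    # Sketch of device_removal.sh@61-67 as reconstructed from the xtrace above.
    get_pci_dir() {
        local dev_name=$1
        # Resolve the netdev back to its PCI function directory in sysfs;
        # here this expands to /sys/devices/pci0000:ae/0000:ae:02.0/0000:af:00.1
        readlink -f /sys/bus/pci/devices/*/net/"$dev_name"/device
    }

    remove_one_nic() {
        local dev_name=$1
        # Writing 1 to the function's 'remove' node hot-unplugs it from the
        # PCI bus (assumed target of the traced 'echo 1')
        echo 1 > "$(get_pci_dir "$dev_name")/remove"
    }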
00:26:11.739 09:04:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # seq 1 10 00:26:11.739 09:04:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@147 -- # for i in $(seq 1 10) 00:26:11.739 09:04:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@148 -- # check_rdma_dev_exists_in_nvmf_tgt rocep175s0f1 00:26:11.739 09:04:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@76 -- # local rdma_dev_name=rocep175s0f1 00:26:11.739 09:04:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # rpc_cmd nvmf_get_stats 00:26:11.739 09:04:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # jq -r '.poll_groups[0].transports[].devices[].name' 00:26:11.739 09:04:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.739 09:04:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@77 -- # grep rocep175s0f1 00:26:11.739 09:04:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:11.739 09:04:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.739 09:04:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@78 -- # return 1 00:26:11.739 09:04:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@149 -- # break 00:26:11.739 09:04:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # get_rdma_dev_count_in_nvmf_tgt 00:26:11.739 09:04:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:26:11.739 09:04:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:26:11.739 09:04:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # jq -r '.poll_groups[0].transports[].devices | length' 00:26:11.739 09:04:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.739 09:04:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:11.739 09:04:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.739 09:04:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@158 -- # ib_count_after_remove=1 00:26:11.739 09:04:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@160 -- # rescan_pci 00:26:11.739 09:04:29 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@57 -- # echo 1 00:26:11.739 [2024-06-09 09:04:29.901797] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device irdma0(0x1e94db0/0x20f6c80) succeed. 00:26:11.739 [2024-06-09 09:04:29.901981] rdma.c:3316:nvmf_rdma_retry_listen_port: *ERROR*: Found new IB device but port 192.168.100.9:4420 is still failed(-1) to listen. 
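The two RPC helpers driving the wait loop above read more easily as bash; this sketch is condensed from the xtrace lines (rpc_cmd is assumed to be SPDK's usual wrapper around scripts/rpc.py, which this log does not show):

    # Reconstruction of device_removal.sh@76-78 and @82-83 from the xtrace (sketch).
    check_rdma_dev_exists_in_nvmf_tgt() {
        local rdma_dev_name=$1
        # grep's exit status makes this succeed only while the RDMA device
        # is still listed in the target's poll-group stats
        rpc_cmd nvmf_get_stats |
            jq -r '.poll_groups[0].transports[].devices[].name' |
            grep "$rdma_dev_name"
    }

    get_rdma_dev_count_in_nvmf_tgt() {
        # Number of RDMA devices the first poll group currently holds
        rpc_cmd nvmf_get_stats |
            jq -r '.poll_groups[0].transports[].devices | length'
    }

The @147-149 loop polls the first helper until it fails (rocep175s0f1 gone, hence ib_count_after_remove=1), and after the PCI rescan the @186-189 loop waits for the second helper's count to climb back above that value before declaring the device recovered.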
00:26:11.739 [2024-06-09 09:04:29.902037] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: port active 00:26:11.739 09:04:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # seq 1 10 00:26:11.739 09:04:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@162 -- # for i in $(seq 1 10) 00:26:11.739 09:04:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # ls /sys/devices/pci0000:ae/0000:ae:02.0/0000:af:00.1/net 00:26:11.739 09:04:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@163 -- # new_net_dev=cvl_0_1 00:26:11.739 09:04:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@164 -- # [[ -z cvl_0_1 ]] 00:26:11.739 09:04:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@166 -- # [[ cvl_0_1 != \c\v\l\_\0\_\1 ]] 00:26:11.739 09:04:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@171 -- # break 00:26:11.739 09:04:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@175 -- # [[ -z cvl_0_1 ]] 00:26:11.739 09:04:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@179 -- # ip link set cvl_0_1 up 00:26:11.739 09:04:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # get_ip_address cvl_0_1 00:26:11.739 09:04:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:26:11.739 09:04:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:26:11.739 09:04:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # awk '{print $4}' 00:26:11.739 09:04:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- nvmf/common.sh@113 -- # cut -d/ -f1 00:26:11.739 09:04:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@180 -- # [[ -z '' ]] 00:26:11.739 09:04:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@181 -- # ip addr add 192.168.100.9/24 dev cvl_0_1 00:26:11.739 09:04:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # seq 1 10 00:26:11.739 [2024-06-09 09:04:30.314538] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:26:11.739 [2024-06-09 09:04:30.314566] rdma.c:3322:nvmf_rdma_retry_listen_port: *NOTICE*: Port 192.168.100.9:4420 come back 00:26:11.739 [2024-06-09 09:04:30.314576] rdma.c:3852:nvmf_process_ib_event: *NOTICE*: Async event: GID table change 00:26:11.739 09:04:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@186 -- # for i in $(seq 1 10) 00:26:11.739 09:04:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # get_rdma_dev_count_in_nvmf_tgt 00:26:11.739 09:04:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@82 -- # local rdma_dev_name= 00:26:11.739 09:04:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # rpc_cmd nvmf_get_stats 00:26:11.739 09:04:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@83 -- # 
jq -r '.poll_groups[0].transports[].devices | length' 00:26:11.739 09:04:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:11.739 09:04:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:26:11.739 09:04:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:11.739 09:04:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@187 -- # ib_count=2 00:26:11.739 09:04:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@188 -- # (( ib_count > ib_count_after_remove )) 00:26:11.740 09:04:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@189 -- # break 00:26:11.740 09:04:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@200 -- # stop_bdevperf 00:26:11.740 09:04:30 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@116 -- # wait 1436091 00:27:33.190 0 00:27:33.190 09:05:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@118 -- # killprocess 1435984 00:27:33.190 09:05:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@949 -- # '[' -z 1435984 ']' 00:27:33.190 09:05:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@953 -- # kill -0 1435984 00:27:33.190 09:05:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@954 -- # uname 00:27:33.190 09:05:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:33.190 09:05:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1435984 00:27:33.190 09:05:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@955 -- # process_name=reactor_2 00:27:33.190 09:05:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@959 -- # '[' reactor_2 = sudo ']' 00:27:33.190 09:05:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1435984' 00:27:33.190 killing process with pid 1435984 00:27:33.190 09:05:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@968 -- # kill 1435984 00:27:33.190 09:05:51 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@973 -- # wait 1435984 00:27:33.190 09:05:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@119 -- # bdevperf_pid= 00:27:33.190 09:05:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@121 -- # cat /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/try.txt 00:27:33.190 [2024-06-09 09:04:21.117654] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:27:33.190 [2024-06-09 09:04:21.117698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1435984 ] 00:27:33.190 EAL: No free 2048 kB hugepages reported on node 1 00:27:33.190 [2024-06-09 09:04:21.165374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.190 [2024-06-09 09:04:21.235451] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:27:33.190 Running I/O for 90 seconds... 00:27:33.190 [2024-06-09 09:04:26.637887] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22) 00:27:33.190 [2024-06-09 09:04:26.638552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:199632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f4000 len:0x1000 key:0x3814cd94 00:27:33.190 [2024-06-09 09:04:26.638566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.190 [2024-06-09 09:04:26.638583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:199640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f6000 len:0x1000 key:0x3814cd94 00:27:33.190 [2024-06-09 09:04:26.638591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.190 [2024-06-09 09:04:26.638600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:199648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077f8000 len:0x1000 key:0x3814cd94 00:27:33.190 [2024-06-09 09:04:26.638607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.190 [2024-06-09 09:04:26.638615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:199656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077fa000 len:0x1000 key:0x3814cd94 00:27:33.190 [2024-06-09 09:04:26.638621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.190 [2024-06-09 09:04:26.638629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:199664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077fc000 len:0x1000 key:0x3814cd94 00:27:33.190 [2024-06-09 09:04:26.638636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.190 [2024-06-09 09:04:26.638644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:199672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000077fe000 len:0x1000 key:0x3814cd94 00:27:33.190 [2024-06-09 09:04:26.638651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.190 [2024-06-09 09:04:26.638659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:199680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.190 [2024-06-09 09:04:26.638666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.190 [2024-06-09 09:04:26.638674] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:199688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.190 [2024-06-09 09:04:26.638680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.190 [2024-06-09 09:04:26.638688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:199696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.190 [2024-06-09 09:04:26.638694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.190 [2024-06-09 09:04:26.638703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:199704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.190 [2024-06-09 09:04:26.638714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.190 [2024-06-09 09:04:26.638729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:199712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.190 [2024-06-09 09:04:26.638736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.190 [2024-06-09 09:04:26.638743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:199720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.190 [2024-06-09 09:04:26.638750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.190 [2024-06-09 09:04:26.638757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:199728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.190 [2024-06-09 09:04:26.638764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.190 [2024-06-09 09:04:26.638771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:199736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.190 [2024-06-09 09:04:26.638777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.190 [2024-06-09 09:04:26.638785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:199744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.190 [2024-06-09 09:04:26.638792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.190 [2024-06-09 09:04:26.638800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:199752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.190 [2024-06-09 09:04:26.638807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.190 [2024-06-09 09:04:26.638814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:199760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.190 [2024-06-09 09:04:26.638821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.190 
[2024-06-09 09:04:26.638828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:199768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.190 [2024-06-09 09:04:26.638836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.190 [2024-06-09 09:04:26.638844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:199776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.190 [2024-06-09 09:04:26.638850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.638857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:199784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.638864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.638871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:199792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.638879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.638887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:199800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.638894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.638904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:199808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.638910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.638918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:199816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.638924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.638932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:199824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.638938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.638947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:199832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.638954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.638962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:199840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.638968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.638976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:199848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.638982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.638990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:199856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.638996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.639004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:199864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.639011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.639018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:199872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.639025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.639033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:199880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.639040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.639048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:199888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.639054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.639062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:199896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.639068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.639077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:199904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.639084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.639091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:199912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.639098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.639106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:199920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.639114] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.639122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:199928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.639128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.639136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:199936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.639142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.639150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:199944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.639156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.639164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:199952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.639171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.639180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:199960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.639186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.639194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:199968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.639200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.639208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:199976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.639215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.639223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:199984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.639229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.639237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:199992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.639243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.639252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:200000 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:33.191 [2024-06-09 09:04:26.639259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.639267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:200008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.639274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.639282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:200016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.639288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.639295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:200024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.639302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.639310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:200032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.639317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.639325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:200040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.639331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.639339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:200048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.639345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.639353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:200056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.639359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.639367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:200064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.639374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.639381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:200072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.639388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.191 [2024-06-09 09:04:26.639395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:93 nsid:1 lba:200080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.191 [2024-06-09 09:04:26.639401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:200088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:200096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:200104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:200112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:200120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:200128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:200136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:200144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:200152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 
09:04:26.639539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:200160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:200168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:200176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:200184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:200192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:200200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:200208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:200216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:200224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:200232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:200240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:200248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:200256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:200264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:200272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:200280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:200288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:200296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:200304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:200312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639826] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:200320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:200328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:200336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:200344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:200352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:200360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:200368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:200376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:200384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.192 [2024-06-09 09:04:26.639963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:200392 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:33.192 [2024-06-09 09:04:26.639970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.193 [2024-06-09 09:04:26.639978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:200400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.193 [2024-06-09 09:04:26.639984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.193 [2024-06-09 09:04:26.639992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:200408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.193 [2024-06-09 09:04:26.639998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.193 [2024-06-09 09:04:26.640006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:200416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.193 [2024-06-09 09:04:26.640012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.193 [2024-06-09 09:04:26.640020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:200424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.193 [2024-06-09 09:04:26.640027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.193 [2024-06-09 09:04:26.640035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:200432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.193 [2024-06-09 09:04:26.640041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.193 [2024-06-09 09:04:26.640048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:200440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.193 [2024-06-09 09:04:26.640055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.193 [2024-06-09 09:04:26.640062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:200448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.193 [2024-06-09 09:04:26.640069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.193 [2024-06-09 09:04:26.640077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:200456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.193 [2024-06-09 09:04:26.640083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.193 [2024-06-09 09:04:26.640091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:200464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:33.193 [2024-06-09 09:04:26.640096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 00:27:33.193 [2024-06-09 09:04:26.640104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:35 nsid:1 lba:200472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:33.193 [2024-06-09 09:04:26.640110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0
[... 21 further queued WRITEs (lba 200480-200640, step 8, various cids) printed and completed identically: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 ...]
00:27:33.193 [2024-06-09 09:04:26.640511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:33.193 [2024-06-09 09:04:26.640520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:33.193 [2024-06-09 09:04:26.640526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:200648 len:8 PRP1 0x0 PRP2 0x0
00:27:33.193 [2024-06-09 09:04:26.640533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.193 [2024-06-09 09:04:26.640567] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192d7640 was disconnected and freed. reset controller.
00:27:33.193 [2024-06-09 09:04:26.640603] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno Invalid argument (22)
[... 4 queued admin ASYNC EVENT REQUESTs (0c) (qid:0 cid:1-4) printed and completed identically: ABORTED - SQ DELETION (00/08) qid:0 cid:32664 cdw0:16 sqhd:63b9 p:0 m:0 dnr:0 ...]
00:27:33.193 [2024-06-09 09:04:26.653986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:27:33.193 [2024-06-09 09:04:26.654000] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_0] in failed state.
00:27:33.193 [2024-06-09 09:04:26.654008] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
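Note: the dump above is the SPDK NVMe driver draining a submission queue during hot removal — nvme_qpair_abort_queued_reqs() (visible in the records) completes every queued command with ABORTED - SQ DELETION (00/08) before the qpair is freed. A quick way to tally how many commands were printed per opcode in a log like this (a hypothetical helper, assuming only the print format shown above; each print here is paired with an abort completion):

    # Count READ/WRITE command prints in the abort dump.
    grep -oE '(READ|WRITE) sqid:[0-9]+ cid:[0-9]+ nsid:[0-9]+ lba:[0-9]+' build.log |
      awk '{n[$1]++} END {for (op in n) printf "%-5s %d\n", op, n[op]}'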
00:27:33.194 [2024-06-09 09:04:26.657009] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_0] resetting controller
00:27:33.194 [2024-06-09 09:04:26.657174] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19)
00:27:33.194 [2024-06-09 09:04:26.657187] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:27:33.194 [2024-06-09 09:04:26.657194] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:27:33.194 [2024-06-09 09:04:26.657208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:27:33.194 [2024-06-09 09:04:26.657216] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_0] in failed state.
00:27:33.194 [2024-06-09 09:04:26.657237] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_0] Ctrlr is in error state
00:27:33.194 [2024-06-09 09:04:26.657244] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_0] controller reinitialization failed
00:27:33.194 [2024-06-09 09:04:26.657253] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_0] already in failed state
00:27:33.194 [2024-06-09 09:04:26.657282] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.194 [2024-06-09 09:04:26.657290] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_0] resetting controller
00:27:33.194 [2024-06-09 09:04:28.550414] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno No such device or address (6)
[... 4 queued admin ASYNC EVENT REQUESTs (0c) (qid:0 cid:1-4) printed and completed identically: ABORTED - SQ DELETION (00/08) qid:0 cid:32664 cdw0:6 sqhd:63b9 p:0 m:0 dnr:0 ...]
00:27:33.194 [2024-06-09 09:04:28.551019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:27:33.194 [2024-06-09 09:04:28.551033] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_1] in failed state.
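Note: status = -19 on the CM event channel corresponds to -ENODEV — address resolution cannot succeed because the underlying RDMA device has been removed — so each reset attempt fails and bdev_nvme immediately schedules another. A sketch for summarizing how often this loop ran per controller (assumes only the message formats visible above):

    log=build.log
    grep -oE '\[nqn[^]]*\] resetting controller' "$log" | sort | uniq -c
    echo "failed resets:     $(grep -c 'Resetting controller failed' "$log")"
    echo "successful resets: $(grep -c 'Resetting controller successful' "$log")"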
00:27:33.194 [2024-06-09 09:04:28.551220] rdma_verbs.c: 113:spdk_rdma_qp_disconnect: *ERROR*: rdma_disconnect failed, errno No such device or address (6)
00:27:33.194 [2024-06-09 09:04:28.551978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:194104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:33.194 [2024-06-09 09:04:28.551994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0
[... 56 further queued WRITEs (lba 194112-194552, step 8, various cids) printed and completed identically: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 ...]
00:27:33.196 [2024-06-09 09:04:28.552844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:193536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000079fe000 len:0x1000 key:0x55ccd231
00:27:33.196 [2024-06-09 09:04:28.552851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0
[... 69 further queued READs (lba 193544-194088, step 8, various cids; SGL KEYED DATA BLOCK ADDRESS descending 0x2000 per I/O from 0x2000079fc000 to 0x200007974000, len:0x1000, key:0x55ccd231) printed and completed identically: ABORTED - SQ DELETION (00/08) qid:1 cid:32664 cdw0:8cb695a0 sqhd:9540 p:0 m:0 dnr:0 ...]
00:27:33.198 [2024-06-09 09:04:28.566617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:33.198 [2024-06-09 09:04:28.566632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:33.198 [2024-06-09 09:04:28.566640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:194096 len:8 PRP1 0x0 PRP2 0x0
00:27:33.198 [2024-06-09 09:04:28.566649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:33.198 [2024-06-09 09:04:28.566690] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_1] resetting controller
00:27:33.198 [2024-06-09 09:04:28.566981] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_ADDR_ERROR (1) from CM event channel (status = -19)
00:27:33.198 [2024-06-09 09:04:28.566995] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:27:33.198 [2024-06-09 09:04:28.567002] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280
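Note: unlike the WRITEs above (offset-based SGLs into the command payload), the aborted READs carry keyed SGLs — a remote-addressable host buffer plus an RDMA rkey (key:0x55ccd231 here) that the target would have RDMA-written the data into. A sketch for pulling the distinct buffer addresses and rkeys out of such a dump (assumes only the ADDRESS/len/key print format shown):

    grep -oE 'ADDRESS 0x[0-9a-f]+ len:0x[0-9a-f]+ key:0x[0-9a-f]+' build.log |
      awk '!a[$2]++ {na++} !k[$4]++ {nk++}
           END {printf "distinct buffers: %d, distinct rkeys: %d\n", na, nk}'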
00:27:33.198 [2024-06-09 09:04:28.567016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:27:33.198 [2024-06-09 09:04:28.567024] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_1] in failed state.
00:27:33.199 [2024-06-09 09:04:28.567035] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_1] Ctrlr is in error state
00:27:33.199 [2024-06-09 09:04:28.567043] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_1] controller reinitialization failed
00:27:33.199 [2024-06-09 09:04:28.567052] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_1] already in failed state
00:27:33.199 [2024-06-09 09:04:28.567071] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.199 [2024-06-09 09:04:28.567078] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_1] resetting controller
00:27:33.199 [2024-06-09 09:04:28.662235] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:27:33.199 [2024-06-09 09:04:28.662256] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:27:33.199 [2024-06-09 09:04:28.662274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:27:33.199 [2024-06-09 09:04:28.662281] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_0] in failed state.
00:27:33.199 [2024-06-09 09:04:28.662291] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_0] Ctrlr is in error state
00:27:33.199 [2024-06-09 09:04:28.662297] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_0] controller reinitialization failed
00:27:33.199 [2024-06-09 09:04:28.662304] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_0] already in failed state
00:27:33.199 [2024-06-09 09:04:28.662320] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.199 [2024-06-09 09:04:28.662327] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_0] resetting controller
00:27:33.199 [2024-06-09 09:04:29.725314] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:27:33.199 [2024-06-09 09:04:30.572112] nvme_rdma.c:1085:nvme_rdma_addr_resolved: *ERROR*: RDMA address resolution error
00:27:33.199 [2024-06-09 09:04:30.572135] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e4280
00:27:33.199 [2024-06-09 09:04:30.572159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:27:33.199 [2024-06-09 09:04:30.572173] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_1] in failed state.
00:27:33.199 [2024-06-09 09:04:30.572184] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_1] Ctrlr is in error state
00:27:33.199 [2024-06-09 09:04:30.572190] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:system_cvl_0_1] controller reinitialization failed
00:27:33.199 [2024-06-09 09:04:30.572198] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_1] already in failed state
00:27:33.199 [2024-06-09 09:04:30.572216] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:33.199 [2024-06-09 09:04:30.572223] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:system_cvl_0_1] resetting controller
00:27:33.199 [2024-06-09 09:04:31.620647] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:27:33.199
00:27:33.199 Latency(us)
00:27:33.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:33.199 Job: Nvme_cvl_0_0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:27:33.199 Verification LBA range: start 0x0 length 0x8000
00:27:33.199 Nvme_cvl_0_0n1 : 90.01 11117.26 43.43 0.00 0.00 11493.79 2215.74 4042510.14
00:27:33.199 Job: Nvme_cvl_0_1n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:27:33.199 Verification LBA range: start 0x0 length 0x8000
00:27:33.199 Nvme_cvl_0_1n1 : 90.01 10844.40 42.36 0.00 0.00 11783.26 2465.40 4042510.14
00:27:33.199 ===================================================================================================================
00:27:33.199 Total : 21961.67 85.79 0.00 0.00 11636.72 2215.74 4042510.14
00:27:33.199 Received shutdown signal, test time was about 90.000000 seconds
00:27:33.199
00:27:33.199 Latency(us)
00:27:33.199 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:33.199 ===================================================================================================================
00:27:33.199 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
09:05:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@123 -- # trap - SIGINT SIGTERM EXIT
00:27:33.199 09:05:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@124 -- # rm -f /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/try.txt
00:27:33.199 09:05:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@202 -- # killprocess 1435819
00:27:33.199 09:05:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@949 -- # '[' -z 1435819 ']'
00:27:33.199 09:05:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@953 -- # kill -0 1435819
00:27:33.199 09:05:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@954 -- # uname
00:27:33.199 09:05:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']'
00:27:33.199 09:05:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1435819
00:27:33.199 09:05:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@955 -- # process_name=reactor_0
00:27:33.199 09:05:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@959 -- # '['
reactor_0 = sudo ']' 00:27:33.199 09:05:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1435819' 00:27:33.199 killing process with pid 1435819 00:27:33.199 09:05:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@968 -- # kill 1435819 00:27:33.199 09:05:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@973 -- # wait 1435819 00:27:33.199 09:05:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@203 -- # nvmfpid= 00:27:33.199 09:05:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- target/device_removal.sh@205 -- # return 0 00:27:33.199 00:27:33.199 real 1m32.510s 00:27:33.199 user 4m35.237s 00:27:33.199 sys 0m1.652s 00:27:33.199 09:05:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:33.199 09:05:52 nvmf_rdma.nvmf_device_removal.nvmf_device_removal_pci_remove -- common/autotest_common.sh@10 -- # set +x 00:27:33.199 ************************************ 00:27:33.199 END TEST nvmf_device_removal_pci_remove 00:27:33.199 ************************************ 00:27:33.199 09:05:52 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@317 -- # nvmftestfini 00:27:33.199 09:05:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:33.199 09:05:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@117 -- # sync 00:27:33.199 09:05:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:27:33.199 09:05:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:27:33.199 09:05:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@120 -- # set +e 00:27:33.199 09:05:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:33.199 09:05:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:27:33.199 rmmod nvme_rdma 00:27:33.199 rmmod nvme_fabrics 00:27:33.199 09:05:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:33.199 09:05:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@124 -- # set -e 00:27:33.199 09:05:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@125 -- # return 0 00:27:33.199 09:05:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:33.199 09:05:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:33.199 09:05:52 nvmf_rdma.nvmf_device_removal -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:27:33.199 09:05:52 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@318 -- # clean_bond_device 00:27:33.199 09:05:52 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@240 -- # ip link 00:27:33.199 09:05:52 nvmf_rdma.nvmf_device_removal -- target/device_removal.sh@240 -- # grep bond_nvmf 00:27:33.199 00:27:33.199 real 3m11.082s 00:27:33.199 user 9m13.924s 00:27:33.199 sys 0m7.315s 00:27:33.199 09:05:52 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:33.199 09:05:52 nvmf_rdma.nvmf_device_removal -- common/autotest_common.sh@10 -- # set +x 00:27:33.199 ************************************ 00:27:33.199 END TEST nvmf_device_removal 00:27:33.199 ************************************ 00:27:33.199 09:05:52 nvmf_rdma -- nvmf/nvmf.sh@80 -- # run_test nvmf_srq_overwhelm 
/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:27:33.199 09:05:52 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:27:33.199 09:05:52 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:33.199 09:05:52 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:33.199 ************************************ 00:27:33.199 START TEST nvmf_srq_overwhelm 00:27:33.199 ************************************ 00:27:33.199 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:27:33.199 * Looking for test storage... 00:27:33.199 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:27:33.199 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:27:33.199 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:27:33.199 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:33.199 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:33.199 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:33.199 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:33.199 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:33.199 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:33.199 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:33.199 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:33.199 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:33.199 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:33.199 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:27:33.199 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:27:33.199 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@47 -- # : 0 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:33.200 09:05:52 
nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@285 -- # xtrace_disable 00:27:33.200 09:05:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # pci_devs=() 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # net_devs=() 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # e810=() 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # local -ga e810 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # x722=() 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # local -ga x722 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # mlx=() 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # local -ga mlx 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:35.106 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:35.106 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:27:35.106 09:05:57 
nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # modinfo irdma 00:27:35.106 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:35.107 Found net devices under 0000:af:00.0: cvl_0_0 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:35.107 Found net devices under 0000:af:00.1: cvl_0_1 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # is_hw=yes 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@420 -- # rdma_device_init 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # uname 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # modprobe ib_cm 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@63 -- # modprobe ib_core 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@64 -- # modprobe ib_umad 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe iw_cm 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # 
modprobe rdma_ucm 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # allocate_nic_ips 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # get_rdma_if_list 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo cvl_0_0 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo cvl_0_1 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:27:35.107 12: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:27:35.107 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:27:35.107 altname enp175s0f0np0 00:27:35.107 altname ens801f0np0 00:27:35.107 inet 192.168.100.8/24 scope global cvl_0_0 00:27:35.107 valid_lft forever preferred_lft forever 00:27:35.107 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:27:35.107 valid_lft forever preferred_lft forever 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # 
for nic_name in $(get_rdma_if_list) 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:27:35.107 13: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:27:35.107 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:27:35.107 altname enp175s0f1np1 00:27:35.107 altname ens801f1np1 00:27:35.107 inet 192.168.100.9/24 scope global cvl_0_1 00:27:35.107 valid_lft forever preferred_lft forever 00:27:35.107 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:27:35.107 valid_lft forever preferred_lft forever 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # return 0 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # get_rdma_if_list 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo cvl_0_0 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- 
nvmf/common.sh@104 -- # echo cvl_0_1 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:27:35.107 192.168.100.9' 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:27:35.107 192.168.100.9' 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # head -n 1 00:27:35.107 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:35.108 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:27:35.108 192.168.100.9' 00:27:35.108 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # tail -n +2 00:27:35.108 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # head -n 1 00:27:35.108 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:35.108 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:27:35.108 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:35.108 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:27:35.108 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:27:35.108 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:27:35.108 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:27:35.108 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:35.108 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@723 -- # xtrace_disable 00:27:35.108 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:35.108 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@481 -- # nvmfpid=1453668 00:27:35.108 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # waitforlisten 1453668 00:27:35.108 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@830 -- # '[' -z 1453668 ']' 00:27:35.108 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@834 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:27:35.108 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@835 -- # local max_retries=100 00:27:35.108 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:35.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:35.108 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:35.108 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@839 -- # xtrace_disable 00:27:35.108 09:05:57 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:35.108 [2024-06-09 09:05:57.643492] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:27:35.108 [2024-06-09 09:05:57.643553] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:35.367 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.367 [2024-06-09 09:05:57.699364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:35.367 [2024-06-09 09:05:57.778321] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:35.367 [2024-06-09 09:05:57.778359] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:35.367 [2024-06-09 09:05:57.778366] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:35.367 [2024-06-09 09:05:57.778371] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:35.367 [2024-06-09 09:05:57.778376] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:35.367 [2024-06-09 09:05:57.778416] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:27:35.367 [2024-06-09 09:05:57.778433] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:27:35.367 [2024-06-09 09:05:57.778522] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:27:35.367 [2024-06-09 09:05:57.778523] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.934 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:27:35.934 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@863 -- # return 0 00:27:35.934 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:35.934 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@729 -- # xtrace_disable 00:27:35.934 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:35.934 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:35.934 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:27:35.934 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:35.934 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:36.193 [2024-06-09 09:05:58.502312] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x8678f0/0x866f30) succeed. 00:27:36.193 [2024-06-09 09:05:58.511178] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x868ca0/0x8674b0) succeed. 00:27:36.193 [2024-06-09 09:05:58.511197] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:27:36.193 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.193 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:27:36.193 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:27:36.193 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:27:36.193 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.193 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:36.193 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.193 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:36.193 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.193 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:36.193 Malloc0 00:27:36.193 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.193 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:27:36.193 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.193 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:36.193 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.193 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:27:36.193 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.193 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:36.193 [2024-06-09 09:05:58.570193] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:36.193 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.193 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:27:36.453 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:27:36.453 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # local i=0 00:27:36.453 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # lsblk -l -o NAME 00:27:36.453 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # grep -q -w nvme0n1 00:27:36.453 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # lsblk -l -o NAME 00:27:36.453 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # grep -q -w nvme0n1 00:27:36.453 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1245 -- # return 0 00:27:36.453 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:27:36.453 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:36.453 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.453 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:36.453 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.453 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:36.453 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.453 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:36.453 Malloc1 00:27:36.453 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.453 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:36.453 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.453 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:36.453 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.453 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:36.453 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.453 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:36.453 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.453 09:05:58 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:27:36.712 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:27:36.712 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # local i=0 00:27:36.712 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # lsblk -l -o NAME 00:27:36.712 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # grep -q -w nvme1n1 00:27:36.712 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # grep -q -w nvme1n1 00:27:36.712 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # lsblk -l -o NAME 00:27:36.712 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1245 -- # return 0 00:27:36.712 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:27:36.712 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:36.712 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.712 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:36.712 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.712 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:27:36.712 09:05:59 
nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.712 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:36.712 Malloc2 00:27:36.712 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.712 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:27:36.712 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.712 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:36.712 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.712 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:27:36.712 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.712 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:36.712 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.712 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:27:36.971 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:27:36.971 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # local i=0 00:27:36.971 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # lsblk -l -o NAME 00:27:36.971 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # grep -q -w nvme2n1 00:27:36.971 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # lsblk -l -o NAME 00:27:36.971 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # grep -q -w nvme2n1 00:27:36.971 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1245 -- # return 0 00:27:36.971 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:27:36.971 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:27:36.971 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.971 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:36.971 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.971 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:27:36.971 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.971 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:36.971 Malloc3 00:27:36.971 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.971 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:27:36.971 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 
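Each nvme connect in this sequence is followed by a waitforblk trace (common/autotest_common.sh@1234-@1245) before the next cnode is set up; it simply polls lsblk until the freshly connected namespace appears as a block device, and the same pattern repeats below for cnode3 through cnode5. A sketch reconstructed from the traced commands; only the lsblk/grep probe is taken from the log, the retry bound and poll interval are assumptions:

# waitforblk, reconstructed from the xtrace output above (bounds assumed).
waitforblk() {
    local i=0
    while ! lsblk -l -o NAME | grep -q -w "$1"; do
        (( ++i > 15 )) && return 1    # assumed retry cap
        sleep 1                       # assumed poll interval
    done
    return 0
}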
00:27:36.971 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:36.971 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.971 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:27:36.971 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:36.971 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:36.971 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:36.971 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:27:37.231 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:27:37.231 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # local i=0 00:27:37.231 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # grep -q -w nvme3n1 00:27:37.231 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # lsblk -l -o NAME 00:27:37.231 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # lsblk -l -o NAME 00:27:37.231 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # grep -q -w nvme3n1 00:27:37.231 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1245 -- # return 0 00:27:37.231 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:27:37.231 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:27:37.231 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.231 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:37.231 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.231 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:27:37.231 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.231 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:37.231 Malloc4 00:27:37.231 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.231 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:27:37.231 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.231 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:37.231 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.231 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:27:37.231 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.231 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@10 -- # set +x 00:27:37.231 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.231 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:27:37.490 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:27:37.490 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # local i=0 00:27:37.490 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # lsblk -l -o NAME 00:27:37.490 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # grep -q -w nvme4n1 00:27:37.490 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # lsblk -l -o NAME 00:27:37.490 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # grep -q -w nvme4n1 00:27:37.490 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1245 -- # return 0 00:27:37.490 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:27:37.490 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:27:37.490 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.490 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:37.490 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.490 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:27:37.490 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.490 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:37.490 Malloc5 00:27:37.490 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.490 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:27:37.490 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.490 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:37.490 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.490 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:27:37.490 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.490 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:37.490 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.490 09:05:59 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 --hostid=801347e8-3fd0-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:27:37.749 09:06:00 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:27:37.749 09:06:00 
nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1234 -- # local i=0
00:27:37.749 09:06:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # grep -q -w nvme5n1
00:27:37.749 09:06:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # lsblk -l -o NAME
00:27:37.749 09:06:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # lsblk -l -o NAME
00:27:37.749 09:06:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1241 -- # grep -q -w nvme5n1
00:27:37.749 09:06:00 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1245 -- # return 0
00:27:37.749 09:06:00 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13
00:27:37.749 [global]
00:27:37.749 thread=1
00:27:37.749 invalidate=1
00:27:37.749 rw=read
00:27:37.749 time_based=1
00:27:37.749 runtime=10
00:27:37.749 ioengine=libaio
00:27:37.749 direct=1
00:27:37.749 bs=1048576
00:27:37.749 iodepth=128
00:27:37.749 norandommap=1
00:27:37.749 numjobs=13
00:27:37.749
00:27:37.749 [job0]
00:27:37.749 filename=/dev/nvme0n1
00:27:37.749 [job1]
00:27:37.749 filename=/dev/nvme2n1
00:27:37.749 [job2]
00:27:37.749 filename=/dev/nvme3n1
00:27:37.749 [job3]
00:27:37.749 filename=/dev/nvme4n1
00:27:37.749 [job4]
00:27:37.749 filename=/dev/nvme5n1
00:27:37.749 [job5]
00:27:37.749 filename=/dev/nvme6n1
00:27:38.006 Could not set queue depth (nvme0n1)
00:27:38.006 Could not set queue depth (nvme2n1)
00:27:38.006 Could not set queue depth (nvme3n1)
00:27:38.007 Could not set queue depth (nvme4n1)
00:27:38.007 Could not set queue depth (nvme5n1)
00:27:38.007 Could not set queue depth (nvme6n1)
00:27:38.265 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:27:38.265 ...
00:27:38.265 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:27:38.265 ...
00:27:38.265 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:27:38.265 ...
00:27:38.265 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:27:38.265 ...
00:27:38.265 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:27:38.265 ...
00:27:38.265 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
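Collected in one place, the setup traced at srq_overwhelm.sh@22-@28 is a single loop run once per subsystem; every command below is taken from the traced output (rpc_cmd and waitforblk are the autotest helpers seen in the trace), only the collapsed loop form is editorial:

# The traced per-subsystem setup, written out as the loop the trace implies.
for i in $(seq 0 5); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    rpc_cmd bdev_malloc_create 64 512 -b Malloc$i                  # 64 MiB bdev, 512-byte blocks
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
    nvme connect -i 15 --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID \
        -t rdma -n nqn.2016-06.io.spdk:cnode$i -a 192.168.100.8 -s 4420
    waitforblk nvme${i}n1
done

The fio-wrapper flags map one-to-one onto the generated [global] section above: -i 1048576 becomes bs=1048576, -d 128 becomes iodepth=128, -t read becomes rw=read, -r 10 becomes runtime=10, and -n 13 becomes numjobs=13, with one [jobN] stanza per connected namespace.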
00:27:38.265 fio-3.35
00:27:38.265 Starting 78 threads
00:27:50.473
00:27:50.473 job0: (groupid=0, jobs=1): err= 0: pid=1454413: Sun Jun 9 09:06:11 2024
00:27:50.473 read: IOPS=61, BW=61.8MiB/s (64.8MB/s)(625MiB/10112msec)
00:27:50.473 slat (usec): min=31, max=142602, avg=16007.44, stdev=25152.88
00:27:50.473 clat (msec): min=103, max=2847, avg=1881.42, stdev=628.62
00:27:50.473 lat (msec): min=122, max=2849, avg=1897.43, stdev=629.71
00:27:50.473 clat percentiles (msec):
00:27:50.473 | 1.00th=[ 131], 5.00th=[ 418], 10.00th=[ 760], 20.00th=[ 1737],
00:27:50.473 | 30.00th=[ 1804], 40.00th=[ 1871], 50.00th=[ 1921], 60.00th=[ 2005],
00:27:50.473 | 70.00th=[ 2165], 80.00th=[ 2400], 90.00th=[ 2668], 95.00th=[ 2702],
00:27:50.473 | 99.00th=[ 2769], 99.50th=[ 2769], 99.90th=[ 2836], 99.95th=[ 2836],
00:27:50.473 | 99.99th=[ 2836]
00:27:50.473 bw ( KiB/s): min=28672, max=106496, per=1.22%, avg=59983.71, stdev=22404.02, samples=17
00:27:50.473 iops : min= 28, max= 104, avg=58.47, stdev=21.96, samples=17
00:27:50.473 lat (msec) : 250=2.40%, 500=3.52%, 750=2.88%, 1000=4.00%, 2000=47.04%
00:27:50.473 lat (msec) : >=2000=40.16%
00:27:50.473 cpu : usr=0.02%, sys=1.15%, ctx=975, majf=0, minf=32769
00:27:50.473 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.1%, >=64=89.9%
00:27:50.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.473 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:50.473 issued rwts: total=625,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.473 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.473 job0: (groupid=0, jobs=1): err= 0: pid=1454414: Sun Jun 9 09:06:11 2024
00:27:50.473 read: IOPS=96, BW=96.2MiB/s (101MB/s)(979MiB/10176msec)
00:27:50.473 slat (usec): min=49, max=175166, avg=10223.71, stdev=34980.33
00:27:50.473 clat (msec): min=162, max=1626, avg=1279.64, stdev=239.28
00:27:50.473 lat (msec): min=302, max=1626, avg=1289.86, stdev=239.99
00:27:50.473 clat percentiles (msec):
00:27:50.473 | 1.00th=[ 326], 5.00th=[ 810], 10.00th=[ 986], 20.00th=[ 1200],
00:27:50.473 | 30.00th=[ 1250], 40.00th=[ 1284], 50.00th=[ 1318], 60.00th=[ 1368],
00:27:50.473 | 70.00th=[ 1401], 80.00th=[ 1435], 90.00th=[ 1469], 95.00th=[ 1569],
00:27:50.473 | 99.00th=[ 1620], 99.50th=[ 1620], 99.90th=[ 1620], 99.95th=[ 1620],
00:27:50.473 | 99.99th=[ 1620]
00:27:50.473 bw ( KiB/s): min=51097, max=126976, per=1.86%, avg=91809.63, stdev=16343.78, samples=19
00:27:50.473 iops : min= 49, max= 124, avg=89.53, stdev=16.03, samples=19
00:27:50.473 lat (msec) : 250=0.10%, 500=2.25%, 750=2.55%, 1000=5.62%, 2000=89.48%
00:27:50.473 cpu : usr=0.05%, sys=1.50%, ctx=861, majf=0, minf=32769
00:27:50.473 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.3%, >=64=93.6%
00:27:50.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.473 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:27:50.473 issued rwts: total=979,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.473 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.473 job0: (groupid=0, jobs=1): err= 0: pid=1454415: Sun Jun 9 09:06:11 2024
00:27:50.473 read: IOPS=64, BW=64.6MiB/s (67.8MB/s)(655MiB/10132msec)
00:27:50.473 slat (usec): min=42, max=459638, avg=15278.86, stdev=33513.68
00:27:50.473 clat (msec): min=121, max=3224, avg=1743.30, stdev=680.52
00:27:50.473 lat (msec): min=132, max=3226, avg=1758.57, stdev=682.89
00:27:50.473 clat percentiles (msec):
00:27:50.473 | 1.00th=[ 146], 5.00th=[ 659], 10.00th=[ 1150], 20.00th=[ 1301],
00:27:50.473 | 30.00th=[ 1368], 40.00th=[ 1452], 50.00th=[ 1569], 60.00th=[ 1653],
00:27:50.473 | 70.00th=[ 2072], 80.00th=[ 2433], 90.00th=[ 2735], 95.00th=[ 3071],
00:27:50.473 | 99.00th=[ 3138], 99.50th=[ 3171], 99.90th=[ 3239], 99.95th=[ 3239],
00:27:50.473 | 99.99th=[ 3239]
00:27:50.473 bw ( KiB/s): min=16384, max=122880, per=1.37%, avg=67575.81, stdev=27365.31, samples=16
00:27:50.474 iops : min= 16, max= 120, avg=65.94, stdev=26.73, samples=16
00:27:50.474 lat (msec) : 250=1.53%, 500=1.83%, 750=2.90%, 1000=2.44%, 2000=60.15%
00:27:50.474 lat (msec) : >=2000=31.15%
00:27:50.474 cpu : usr=0.03%, sys=1.04%, ctx=976, majf=0, minf=32769
00:27:50.474 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.9%, >=64=90.4%
00:27:50.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.474 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:50.474 issued rwts: total=655,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.474 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.474 job0: (groupid=0, jobs=1): err= 0: pid=1454416: Sun Jun 9 09:06:11 2024
00:27:50.474 read: IOPS=67, BW=67.6MiB/s (70.8MB/s)(686MiB/10154msec)
00:27:50.474 slat (usec): min=32, max=194982, avg=14590.98, stdev=24527.09
00:27:50.474 clat (msec): min=140, max=2996, avg=1729.64, stdev=575.70
00:27:50.474 lat (msec): min=158, max=3003, avg=1744.23, stdev=576.77
00:27:50.474 clat percentiles (msec):
00:27:50.474 | 1.00th=[ 201], 5.00th=[ 592], 10.00th=[ 1167], 20.00th=[ 1435],
00:27:50.474 | 30.00th=[ 1485], 40.00th=[ 1536], 50.00th=[ 1670], 60.00th=[ 1838],
00:27:50.474 | 70.00th=[ 2022], 80.00th=[ 2140], 90.00th=[ 2433], 95.00th=[ 2802],
00:27:50.474 | 99.00th=[ 2937], 99.50th=[ 2970], 99.90th=[ 3004], 99.95th=[ 3004],
00:27:50.474 | 99.99th=[ 3004]
00:27:50.474 bw ( KiB/s): min=22483, max=100352, per=1.45%, avg=71521.44, stdev=25972.03, samples=16
00:27:50.474 iops : min= 21, max= 98, avg=69.56, stdev=25.64, samples=16
00:27:50.474 lat (msec) : 250=1.31%, 500=3.21%, 750=3.35%, 1000=1.02%, 2000=57.73%
00:27:50.474 lat (msec) : >=2000=33.38%
00:27:50.474 cpu : usr=0.04%, sys=1.40%, ctx=972, majf=0, minf=32769
00:27:50.474 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.7%, >=64=90.8%
00:27:50.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.474 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:50.474 issued rwts: total=686,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.474 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.474 job0: (groupid=0, jobs=1): err= 0: pid=1454417: Sun Jun 9 09:06:11 2024
00:27:50.474 read: IOPS=51, BW=51.4MiB/s (53.9MB/s)(522MiB/10163msec)
00:27:50.474 slat (usec): min=39, max=180027, avg=19202.45, stdev=22146.88
00:27:50.474 clat (msec): min=135, max=3607, avg=2192.72, stdev=728.42
00:27:50.474 lat (msec): min=209, max=3615, avg=2211.92, stdev=728.32
00:27:50.474 clat percentiles (msec):
00:27:50.474 | 1.00th=[ 296], 5.00th=[ 776], 10.00th=[ 1150], 20.00th=[ 1620],
00:27:50.474 | 30.00th=[ 1871], 40.00th=[ 2165], 50.00th=[ 2366], 60.00th=[ 2433],
00:27:50.474 | 70.00th=[ 2500], 80.00th=[ 2769], 90.00th=[ 3104], 95.00th=[ 3339],
00:27:50.474 | 99.00th=[ 3540], 99.50th=[ 3608], 99.90th=[ 3608], 99.95th=[ 3608],
00:27:50.474 | 99.99th=[ 3608]
00:27:50.474 bw ( KiB/s): min=18395, max=110592, per=1.02%, avg=50406.87, stdev=21818.70, samples=16
00:27:50.474 iops : min= 17, max= 108, avg=49.00, stdev=21.32, samples=16
00:27:50.474 lat (msec) : 250=0.57%, 500=2.11%, 750=2.11%, 1000=3.26%, 2000=27.39%
00:27:50.474 lat (msec) : >=2000=64.56%
00:27:50.474 cpu : usr=0.00%, sys=1.15%, ctx=965, majf=0, minf=32769
00:27:50.474 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.1%, 32=6.1%, >=64=87.9%
00:27:50.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.474 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:27:50.474 issued rwts: total=522,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.474 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.474 job0: (groupid=0, jobs=1): err= 0: pid=1454418: Sun Jun 9 09:06:11 2024
00:27:50.474 read: IOPS=51, BW=51.6MiB/s (54.1MB/s)(524MiB/10159msec)
00:27:50.474 slat (usec): min=41, max=218719, avg=19118.52, stdev=30325.51
00:27:50.474 clat (msec): min=137, max=3284, avg=2128.70, stdev=734.75
00:27:50.474 lat (msec): min=244, max=3310, avg=2147.82, stdev=734.49
00:27:50.474 clat percentiles (msec):
00:27:50.474 | 1.00th=[ 259], 5.00th=[ 751], 10.00th=[ 1267], 20.00th=[ 1485],
00:27:50.474 | 30.00th=[ 1703], 40.00th=[ 1921], 50.00th=[ 2232], 60.00th=[ 2400],
00:27:50.474 | 70.00th=[ 2534], 80.00th=[ 2836], 90.00th=[ 3205], 95.00th=[ 3239],
00:27:50.474 | 99.00th=[ 3272], 99.50th=[ 3272], 99.90th=[ 3272], 99.95th=[ 3272],
00:27:50.474 | 99.99th=[ 3272]
00:27:50.474 bw ( KiB/s): min=14336, max=102400, per=1.10%, avg=54058.93, stdev=27164.79, samples=15
00:27:50.474 iops : min= 14, max= 100, avg=52.73, stdev=26.44, samples=15
00:27:50.474 lat (msec) : 250=0.76%, 500=2.67%, 750=1.53%, 1000=1.91%, 2000=36.45%
00:27:50.474 lat (msec) : >=2000=56.68%
00:27:50.474 cpu : usr=0.01%, sys=1.05%, ctx=922, majf=0, minf=32769
00:27:50.474 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.1%, 32=6.1%, >=64=88.0%
00:27:50.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.474 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:27:50.474 issued rwts: total=524,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.474 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.474 job0: (groupid=0, jobs=1): err= 0: pid=1454419: Sun Jun 9 09:06:11 2024
00:27:50.474 read: IOPS=38, BW=39.0MiB/s (40.9MB/s)(397MiB/10189msec)
00:27:50.474 slat (usec): min=37, max=205064, avg=25310.76, stdev=32539.13
00:27:50.474 clat (msec): min=137, max=5383, avg=2973.11, stdev=1363.37
00:27:50.474 lat (msec): min=273, max=5405, avg=2998.43, stdev=1366.71
00:27:50.474 clat percentiles (msec):
00:27:50.474 | 1.00th=[ 300], 5.00th=[ 802], 10.00th=[ 1200], 20.00th=[ 1888],
00:27:50.474 | 30.00th=[ 2232], 40.00th=[ 2433], 50.00th=[ 2702], 60.00th=[ 3239],
00:27:50.474 | 70.00th=[ 3742], 80.00th=[ 4530], 90.00th=[ 5000], 95.00th=[ 5134],
00:27:50.474 | 99.00th=[ 5336], 99.50th=[ 5403], 99.90th=[ 5403], 99.95th=[ 5403],
00:27:50.474 | 99.99th=[ 5403]
00:27:50.474 bw ( KiB/s): min=12288, max=71680, per=0.70%, avg=34432.00, stdev=17763.28, samples=16
00:27:50.474 iops : min= 12, max= 70, avg=33.62, stdev=17.35, samples=16
00:27:50.474 lat (msec) : 250=0.25%, 500=2.77%, 750=1.76%, 1000=3.53%, 2000=13.10%
00:27:50.474 lat (msec) : >=2000=78.59%
00:27:50.474 cpu : usr=0.06%, sys=1.04%, ctx=900, majf=0, minf=32769
00:27:50.474 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=8.1%, >=64=84.1%
00:27:50.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.474 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:27:50.474 issued rwts: total=397,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.474 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.474 job0: (groupid=0, jobs=1): err= 0: pid=1454420: Sun Jun 9 09:06:11 2024
00:27:50.474 read: IOPS=67, BW=67.1MiB/s (70.3MB/s)(680MiB/10141msec)
00:27:50.474 slat (usec): min=31, max=139425, avg=14755.00, stdev=22134.84
00:27:50.474 clat (msec): min=103, max=2462, avg=1788.28, stdev=421.69
00:27:50.474 lat (msec): min=146, max=2467, avg=1803.03, stdev=420.46
00:27:50.474 clat percentiles (msec):
00:27:50.474 | 1.00th=[ 271], 5.00th=[ 927], 10.00th=[ 1267], 20.00th=[ 1552],
00:27:50.474 | 30.00th=[ 1670], 40.00th=[ 1770], 50.00th=[ 1871], 60.00th=[ 1938],
00:27:50.474 | 70.00th=[ 1989], 80.00th=[ 2106], 90.00th=[ 2265], 95.00th=[ 2333],
00:27:50.474 | 99.00th=[ 2433], 99.50th=[ 2433], 99.90th=[ 2467], 99.95th=[ 2467],
00:27:50.474 | 99.99th=[ 2467]
00:27:50.474 bw ( KiB/s): min=40960, max=100352, per=1.35%, avg=66494.47, stdev=18240.12, samples=17
00:27:50.474 iops : min= 40, max= 98, avg=64.88, stdev=17.88, samples=17
00:27:50.474 lat (msec) : 250=0.44%, 500=1.91%, 750=1.18%, 1000=2.35%, 2000=64.56%
00:27:50.474 lat (msec) : >=2000=29.56%
00:27:50.474 cpu : usr=0.02%, sys=1.30%, ctx=933, majf=0, minf=32769
00:27:50.474 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.7%, >=64=90.7%
00:27:50.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.474 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:50.474 issued rwts: total=680,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.474 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.474 job0: (groupid=0, jobs=1): err= 0: pid=1454421: Sun Jun 9 09:06:11 2024
00:27:50.474 read: IOPS=47, BW=47.4MiB/s (49.7MB/s)(480MiB/10126msec)
00:27:50.474 slat (usec): min=42, max=132411, avg=20873.51, stdev=25415.65
00:27:50.474 clat (msec): min=104, max=3121, avg=2366.61, stdev=694.73
00:27:50.474 lat (msec): min=164, max=3133, avg=2387.49, stdev=694.05
00:27:50.474 clat percentiles (msec):
00:27:50.474 | 1.00th=[ 220], 5.00th=[ 667], 10.00th=[ 1217], 20.00th=[ 2039],
00:27:50.474 | 30.00th=[ 2400], 40.00th=[ 2534], 50.00th=[ 2635], 60.00th=[ 2702],
00:27:50.474 | 70.00th=[ 2769], 80.00th=[ 2836], 90.00th=[ 2937], 95.00th=[ 3004],
00:27:50.474 | 99.00th=[ 3071], 99.50th=[ 3104], 99.90th=[ 3138], 99.95th=[ 3138],
00:27:50.474 | 99.99th=[ 3138]
00:27:50.474 bw ( KiB/s): min=20480, max=57344, per=0.91%, avg=45043.69, stdev=9914.61, samples=16
00:27:50.474 iops : min= 20, max= 56, avg=43.88, stdev= 9.64, samples=16
00:27:50.474 lat (msec) : 250=1.25%, 500=2.08%, 750=3.54%, 1000=2.08%, 2000=9.79%
00:27:50.474 lat (msec) : >=2000=81.25%
00:27:50.474 cpu : usr=0.00%, sys=1.08%, ctx=970, majf=0, minf=32769
00:27:50.474 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.3%, 32=6.7%, >=64=86.9%
00:27:50.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.474 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:27:50.474 issued rwts: total=480,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.474 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.474 job0: (groupid=0, jobs=1): err= 0: pid=1454422: Sun Jun 9 09:06:11 2024
00:27:50.474 read: IOPS=56, BW=56.9MiB/s (59.7MB/s)(578MiB/10160msec)
00:27:50.474 slat (usec): min=48, max=133978, avg=17355.20, stdev=25873.82
00:27:50.474 clat (msec): min=125, max=3196, avg=2110.15, stdev=584.01
00:27:50.474 lat (msec): min=167, max=3200, avg=2127.50, stdev=582.71
00:27:50.474 clat percentiles (msec):
00:27:50.474 | 1.00th=[ 317], 5.00th=[ 1053], 10.00th=[ 1435], 20.00th=[ 1687],
00:27:50.474 | 30.00th=[ 1871], 40.00th=[ 2005], 50.00th=[ 2106], 60.00th=[ 2265],
00:27:50.474 | 70.00th=[ 2400], 80.00th=[ 2601], 90.00th=[ 2869], 95.00th=[ 3037],
00:27:50.474 | 99.00th=[ 3071], 99.50th=[ 3171], 99.90th=[ 3205], 99.95th=[ 3205],
00:27:50.474 | 99.99th=[ 3205]
00:27:50.474 bw ( KiB/s): min=20480, max=94208, per=1.10%, avg=54177.18, stdev=26377.48, samples=17
00:27:50.474 iops : min= 20, max= 92, avg=52.65, stdev=25.73, samples=17
00:27:50.474 lat (msec) : 250=0.87%, 500=1.21%, 750=1.21%, 1000=1.56%, 2000=35.12%
00:27:50.474 lat (msec) : >=2000=60.03%
00:27:50.474 cpu : usr=0.02%, sys=1.30%, ctx=997, majf=0, minf=32769
00:27:50.474 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.5%, >=64=89.1%
00:27:50.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.474 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:50.475 issued rwts: total=578,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.475 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.475 job0: (groupid=0, jobs=1): err= 0: pid=1454423: Sun Jun 9 09:06:11 2024
00:27:50.475 read: IOPS=75, BW=75.4MiB/s (79.1MB/s)(768MiB/10185msec)
00:27:50.475 slat (usec): min=34, max=123615, avg=13068.11, stdev=20873.45
00:27:50.475 clat (msec): min=144, max=2723, avg=1524.89, stdev=454.73
00:27:50.475 lat (msec): min=188, max=2752, avg=1537.96, stdev=455.25
00:27:50.475 clat percentiles (msec):
00:27:50.475 | 1.00th=[ 247], 5.00th=[ 818], 10.00th=[ 1099], 20.00th=[ 1250],
00:27:50.475 | 30.00th=[ 1334], 40.00th=[ 1418], 50.00th=[ 1552], 60.00th=[ 1603],
00:27:50.475 | 70.00th=[ 1653], 80.00th=[ 1720], 90.00th=[ 2232], 95.00th=[ 2534],
00:27:50.475 | 99.00th=[ 2635], 99.50th=[ 2668], 99.90th=[ 2735], 99.95th=[ 2735],
00:27:50.475 | 99.99th=[ 2735]
00:27:50.475 bw ( KiB/s): min=59392, max=120832, per=1.77%, avg=87381.33, stdev=19253.48, samples=15
00:27:50.475 iops : min= 58, max= 118, avg=85.33, stdev=18.80, samples=15
00:27:50.475 lat (msec) : 250=1.04%, 500=1.30%, 750=1.82%, 1000=4.30%, 2000=80.47%
00:27:50.475 lat (msec) : >=2000=11.07%
00:27:50.475 cpu : usr=0.05%, sys=1.33%, ctx=1003, majf=0, minf=32769
00:27:50.475 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.2%, >=64=91.8%
00:27:50.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.475 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:50.475 issued rwts: total=768,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.475 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.475 job0: (groupid=0, jobs=1): err= 0: pid=1454424: Sun Jun 9 09:06:11 2024
00:27:50.475 read: IOPS=54, BW=54.9MiB/s (57.6MB/s)(555MiB/10103msec)
00:27:50.475 slat (usec): min=44, max=140966, avg=18081.10, stdev=24617.18
00:27:50.475 clat (msec): min=65, max=3191, avg=2100.86, stdev=713.85
00:27:50.475 lat (msec): min=177, max=3193, avg=2118.94, stdev=714.83
00:27:50.475 clat percentiles (msec):
00:27:50.475 | 1.00th=[ 182], 5.00th=[ 468], 10.00th=[ 860], 20.00th=[ 1754],
00:27:50.475 | 30.00th=[ 2022], 40.00th=[ 2165], 50.00th=[ 2265], 60.00th=[ 2366],
00:27:50.475 | 70.00th=[ 2467], 80.00th=[ 2601], 90.00th=[ 2869], 95.00th=[ 3037],
00:27:50.475 | 99.00th=[ 3171], 99.50th=[ 3171], 99.90th=[ 3205], 99.95th=[ 3205],
00:27:50.475 | 99.99th=[ 3205]
00:27:50.475 bw ( KiB/s): min=22483, max=86016, per=1.11%, avg=54653.19, stdev=20136.66, samples=16
00:27:50.475 iops : min= 21, max= 84, avg=53.31, stdev=19.77, samples=16
00:27:50.475 lat (msec) : 100=0.18%, 250=2.70%, 500=2.88%, 750=3.24%, 1000=1.98%
00:27:50.475 lat (msec) : 2000=17.84%, >=2000=71.17%
00:27:50.475 cpu : usr=0.01%, sys=1.13%, ctx=979, majf=0, minf=32769
00:27:50.475 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.9%, 32=5.8%, >=64=88.6%
00:27:50.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.475 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:50.475 issued rwts: total=555,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.475 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.475 job0: (groupid=0, jobs=1): err= 0: pid=1454425: Sun Jun 9 09:06:11 2024
00:27:50.475 read: IOPS=56, BW=56.3MiB/s (59.1MB/s)(572MiB/10153msec)
00:27:50.475 slat (usec): min=39, max=156715, avg=17489.79, stdev=25366.50
00:27:50.475 clat (msec): min=145, max=2862, avg=2021.78, stdev=555.55
00:27:50.475 lat (msec): min=177, max=2865, avg=2039.27, stdev=555.54
00:27:50.475 clat percentiles (msec):
00:27:50.475 | 1.00th=[ 186], 5.00th=[ 472], 10.00th=[ 1234], 20.00th=[ 1955],
00:27:50.475 | 30.00th=[ 2039], 40.00th=[ 2089], 50.00th=[ 2140], 60.00th=[ 2232],
00:27:50.475 | 70.00th=[ 2299], 80.00th=[ 2366], 90.00th=[ 2467], 95.00th=[ 2534],
00:27:50.475 | 99.00th=[ 2735], 99.50th=[ 2869], 99.90th=[ 2869], 99.95th=[ 2869],
00:27:50.475 | 99.99th=[ 2869]
00:27:50.475 bw ( KiB/s): min= 8192, max=83968, per=1.09%, avg=53591.06, stdev=19482.52, samples=17
00:27:50.475 iops : min= 8, max= 82, avg=52.18, stdev=19.05, samples=17
00:27:50.475 lat (msec) : 250=2.10%, 500=2.97%, 750=2.62%, 1000=0.52%, 2000=15.73%
00:27:50.475 lat (msec) : >=2000=76.05%
00:27:50.475 cpu : usr=0.04%, sys=1.19%, ctx=949, majf=0, minf=32769
00:27:50.475 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.6%, >=64=89.0%
00:27:50.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.475 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:50.475 issued rwts: total=572,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.475 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.475 job1: (groupid=0, jobs=1): err= 0: pid=1454437: Sun Jun 9 09:06:11 2024
00:27:50.475 read: IOPS=60, BW=60.5MiB/s (63.5MB/s)(612MiB/10112msec)
00:27:50.475 slat (usec): min=51, max=176534, avg=16336.60, stdev=36331.83
00:27:50.475 clat (msec): min=110, max=4365, avg=1969.89, stdev=1087.65
00:27:50.475 lat (msec): min=116, max=4415, avg=1986.22, stdev=1092.55
00:27:50.475 clat percentiles (msec):
00:27:50.475 | 1.00th=[ 194], 5.00th=[ 430], 10.00th=[ 1150], 20.00th=[ 1284],
00:27:50.475 | 30.00th=[ 1334], 40.00th=[ 1418], 50.00th=[ 1435], 60.00th=[ 1536],
00:27:50.475 | 70.00th=[ 2567], 80.00th=[ 3138], 90.00th=[ 3708], 95.00th=[ 4111],
00:27:50.475 | 99.00th=[ 4329], 99.50th=[ 4329], 99.90th=[ 4396], 99.95th=[ 4396],
00:27:50.475 | 99.99th=[ 4396]
00:27:50.475 bw ( KiB/s): min=10240, max=96256, per=1.12%, avg=55171.00, stdev=32294.93, samples=18
00:27:50.475 iops : min= 10, max= 94, avg=53.78, stdev=31.50, samples=18
00:27:50.475 lat (msec) : 250=2.12%, 500=3.10%, 750=3.10%, 1000=0.82%, 2000=56.21%
00:27:50.475 lat (msec) : >=2000=34.64%
00:27:50.475 cpu : usr=0.02%, sys=1.31%, ctx=859, majf=0, minf=32769
00:27:50.475 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.2%, >=64=89.7%
00:27:50.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.475 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:50.475 issued rwts: total=612,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.475 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.475 job1: (groupid=0, jobs=1): err= 0: pid=1454438: Sun Jun 9 09:06:11 2024
00:27:50.475 read: IOPS=59, BW=59.2MiB/s (62.1MB/s)(599MiB/10119msec)
00:27:50.475 slat (usec): min=36, max=149223, avg=16701.88, stdev=32346.71
00:27:50.475 clat (msec): min=112, max=2689, avg=1924.30, stdev=432.16
00:27:50.475 lat (msec): min=182, max=2725, avg=1941.00, stdev=430.36
00:27:50.475 clat percentiles (msec):
00:27:50.475 | 1.00th=[ 334], 5.00th=[ 961], 10.00th=[ 1334], 20.00th=[ 1737],
00:27:50.475 | 30.00th=[ 1888], 40.00th=[ 1989], 50.00th=[ 2022], 60.00th=[ 2056],
00:27:50.475 | 70.00th=[ 2106], 80.00th=[ 2232], 90.00th=[ 2333], 95.00th=[ 2400],
00:27:50.475 | 99.00th=[ 2601], 99.50th=[ 2635], 99.90th=[ 2702], 99.95th=[ 2702],
00:27:50.475 | 99.99th=[ 2702]
00:27:50.475 bw ( KiB/s): min=26624, max=104448, per=1.22%, avg=60408.25, stdev=23917.21, samples=16
00:27:50.475 iops : min= 26, max= 102, avg=58.88, stdev=23.51, samples=16
00:27:50.475 lat (msec) : 250=0.50%, 500=1.34%, 750=1.34%, 1000=1.84%, 2000=38.23%
00:27:50.475 lat (msec) : >=2000=56.76%
00:27:50.475 cpu : usr=0.03%, sys=1.05%, ctx=946, majf=0, minf=32769
00:27:50.475 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.7%, 32=5.3%, >=64=89.5%
00:27:50.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.475 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:50.475 issued rwts: total=599,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.475 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.475 job1: (groupid=0, jobs=1): err= 0: pid=1454439: Sun Jun 9 09:06:11 2024
00:27:50.475 read: IOPS=51, BW=51.4MiB/s (53.9MB/s)(523MiB/10173msec)
00:27:50.475 slat (usec): min=52, max=90475, avg=19154.16, stdev=19006.20
00:27:50.475 clat (msec): min=152, max=3077, avg=2310.38, stdev=585.08
00:27:50.475 lat (msec): min=222, max=3084, avg=2329.54, stdev=583.81
00:27:50.475 clat percentiles (msec):
00:27:50.475 | 1.00th=[ 388], 5.00th=[ 894], 10.00th=[ 1552], 20.00th=[ 2140],
00:27:50.475 | 30.00th=[ 2232], 40.00th=[ 2299], 50.00th=[ 2400], 60.00th=[ 2534],
00:27:50.475 | 70.00th=[ 2668], 80.00th=[ 2769], 90.00th=[ 2869], 95.00th=[ 2937],
00:27:50.475 | 99.00th=[ 2970], 99.50th=[ 3037], 99.90th=[ 3071], 99.95th=[ 3071],
00:27:50.475 | 99.99th=[ 3071]
00:27:50.475 bw ( KiB/s): min=32768, max=69632, per=0.96%, avg=47585.88, stdev=10304.55, samples=17
00:27:50.475 iops : min= 32, max= 68, avg=46.47, stdev=10.06, samples=17
00:27:50.475 lat (msec) : 250=0.76%, 500=1.34%, 750=2.10%, 1000=1.72%, 2000=11.66%
00:27:50.475 lat (msec) : >=2000=82.41%
00:27:50.475 cpu : usr=0.02%, sys=1.35%, ctx=1026, majf=0, minf=32769
00:27:50.475 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.1%, 32=6.1%, >=64=88.0%
00:27:50.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.475 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:27:50.475 issued rwts: total=523,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.475 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.475 job1: (groupid=0, jobs=1): err= 0: pid=1454440: Sun Jun 9 09:06:11 2024
00:27:50.475 read: IOPS=64, BW=64.0MiB/s (67.1MB/s)(646MiB/10090msec)
00:27:50.475 slat (usec): min=29, max=167820, avg=15482.96, stdev=31234.11
00:27:50.475 clat (msec): min=85, max=2601, avg=1769.41, stdev=540.98
00:27:50.475 lat (msec): min=95, max=2688, avg=1784.90, stdev=541.82
00:27:50.475 clat percentiles (msec):
00:27:50.475 | 1.00th=[ 104], 5.00th=[ 418], 10.00th=[ 860], 20.00th=[ 1519],
00:27:50.475 | 30.00th=[ 1737], 40.00th=[ 1838], 50.00th=[ 1921], 60.00th=[ 1955],
00:27:50.475 | 70.00th=[ 2005], 80.00th=[ 2106], 90.00th=[ 2299], 95.00th=[ 2400],
00:27:50.475 | 99.00th=[ 2567], 99.50th=[ 2567], 99.90th=[ 2601], 99.95th=[ 2601],
00:27:50.475 | 99.99th=[ 2601]
00:27:50.475 bw ( KiB/s): min=32768, max=98107, per=1.35%, avg=66419.69, stdev=20676.70, samples=16
00:27:50.475 iops : min= 32, max= 95, avg=64.81, stdev=20.11, samples=16
00:27:50.475 lat (msec) : 100=0.77%, 250=1.55%, 500=3.56%, 750=3.10%, 1000=2.32%
00:27:50.475 lat (msec) : 2000=57.89%, >=2000=30.80%
00:27:50.475 cpu : usr=0.03%, sys=1.02%, ctx=936, majf=0, minf=32769
00:27:50.475 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=5.0%, >=64=90.2%
00:27:50.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.475 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:50.475 issued rwts: total=646,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.476 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.476 job1: (groupid=0, jobs=1): err= 0: pid=1454442: Sun Jun 9 09:06:11 2024
00:27:50.476 read: IOPS=51, BW=51.3MiB/s (53.8MB/s)(520MiB/10131msec)
00:27:50.476 slat (usec): min=33, max=187404, avg=19229.20, stdev=33802.13
00:27:50.476 clat (msec): min=129, max=3273, avg=2066.27, stdev=677.26
00:27:50.476 lat (msec): min=131, max=3275, avg=2085.50, stdev=678.42
00:27:50.476 clat percentiles (msec):
00:27:50.476 | 1.00th=[ 220], 5.00th=[ 542], 10.00th=[ 1167], 20.00th=[ 1670],
00:27:50.476 | 30.00th=[ 1838], 40.00th=[ 1989], 50.00th=[ 2123], 60.00th=[ 2299],
00:27:50.476 | 70.00th=[ 2467], 80.00th=[ 2601], 90.00th=[ 2869], 95.00th=[ 3037],
00:27:50.476 | 99.00th=[ 3205], 99.50th=[ 3272], 99.90th=[ 3272], 99.95th=[ 3272],
00:27:50.476 | 99.99th=[ 3272]
00:27:50.476 bw ( KiB/s): min=16416, max=94208, per=1.17%, avg=57485.50, stdev=20248.50, samples=14
00:27:50.476 iops : min= 16, max= 92, avg=56.00, stdev=19.89, samples=14
00:27:50.476 lat (msec) : 250=1.35%, 500=2.69%, 750=1.73%, 1000=3.46%, 2000=31.15%
00:27:50.476 lat (msec) : >=2000=59.62%
00:27:50.476 cpu : usr=0.02%, sys=1.02%, ctx=952, majf=0, minf=32769
00:27:50.476 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.1%, 32=6.2%, >=64=87.9%
00:27:50.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.476 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:27:50.476 issued rwts: total=520,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.476 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.476 job1: (groupid=0, jobs=1): err= 0: pid=1454443: Sun Jun 9 09:06:11 2024
00:27:50.476 read: IOPS=54, BW=54.6MiB/s (57.2MB/s)(552MiB/10117msec)
00:27:50.476 slat (usec): min=31, max=222646, avg=18198.10, stdev=37494.05
00:27:50.476 clat (msec): min=69, max=3820, avg=2110.38, stdev=788.79
00:27:50.476 lat (msec): min=171, max=3821, avg=2128.57, stdev=789.06
00:27:50.476 clat percentiles (msec):
00:27:50.476 | 1.00th=[ 205], 5.00th=[ 1045], 10.00th=[ 1401], 20.00th=[ 1469],
00:27:50.476 | 30.00th=[ 1687], 40.00th=[ 1888], 50.00th=[ 1955], 60.00th=[ 2039],
00:27:50.476 | 70.00th=[ 2333], 80.00th=[ 3037], 90.00th=[ 3339], 95.00th=[ 3540],
00:27:50.476 | 99.00th=[ 3742], 99.50th=[ 3742], 99.90th=[ 3809], 99.95th=[ 3809],
00:27:50.476 | 99.99th=[ 3809]
00:27:50.476 bw ( KiB/s): min=20480, max=96256, per=1.04%, avg=51072.94, stdev=27160.99, samples=17
00:27:50.476 iops : min= 20, max= 94, avg=49.76, stdev=26.63, samples=17
00:27:50.476 lat (msec) : 100=0.18%, 250=1.63%, 500=0.36%, 750=1.63%, 1000=0.36%
00:27:50.476 lat (msec) : 2000=51.99%, >=2000=43.84%
00:27:50.476 cpu : usr=0.00%, sys=1.03%, ctx=845, majf=0, minf=32769
00:27:50.476 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.9%, 32=5.8%, >=64=88.6%
00:27:50.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.476 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:50.476 issued rwts: total=552,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.476 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.476 job1: (groupid=0, jobs=1): err= 0: pid=1454444: Sun Jun 9 09:06:11 2024
00:27:50.476 read: IOPS=53, BW=53.2MiB/s (55.8MB/s)(538MiB/10113msec)
00:27:50.476 slat (usec): min=46, max=161750, avg=18618.96, stdev=35777.43
00:27:50.476 clat (msec): min=93, max=3183, avg=2049.12, stdev=622.15
00:27:50.476 lat (msec): min=170, max=3192, avg=2067.74, stdev=622.79
00:27:50.476 clat percentiles (msec):
00:27:50.476 | 1.00th=[ 215], 5.00th=[ 718], 10.00th=[ 1267], 20.00th=[ 1703],
00:27:50.476 | 30.00th=[ 1854], 40.00th=[ 2005], 50.00th=[ 2106], 60.00th=[ 2165],
00:27:50.476 | 70.00th=[ 2265], 80.00th=[ 2668], 90.00th=[ 2836], 95.00th=[ 3004],
00:27:50.476 | 99.00th=[ 3071], 99.50th=[ 3071], 99.90th=[ 3171], 99.95th=[ 3171],
00:27:50.476 | 99.99th=[ 3171]
00:27:50.476 bw ( KiB/s): min= 4096, max=98304, per=1.13%, avg=55971.80, stdev=23704.60, samples=15
00:27:50.476 iops : min= 4, max= 96, avg=54.60, stdev=23.16, samples=15
00:27:50.476 lat (msec) : 100=0.19%, 250=1.12%, 500=2.79%, 750=1.86%, 1000=1.67%
00:27:50.476 lat (msec) : 2000=31.97%, >=2000=60.41%
00:27:50.476 cpu : usr=0.01%, sys=0.99%, ctx=894, majf=0, minf=32769
00:27:50.476 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=3.0%, 32=5.9%, >=64=88.3%
00:27:50.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.476 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:50.476 issued rwts: total=538,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.476 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.476 job1: (groupid=0, jobs=1): err= 0: pid=1454445: Sun Jun 9 09:06:11 2024
00:27:50.476 read: IOPS=51, BW=51.7MiB/s (54.3MB/s)(523MiB/10107msec)
00:27:50.476 slat (usec): min=40, max=127524, avg=19152.94, stdev=21772.58
00:27:50.476 clat (msec): min=86, max=3065, avg=2110.86, stdev=741.43
00:27:50.476 lat (msec): min=109, max=3118, avg=2130.01, stdev=743.76
00:27:50.476 clat percentiles (msec):
00:27:50.476 | 1.00th=[ 116], 5.00th=[ 693], 10.00th=[ 953], 20.00th=[ 1536],
00:27:50.476 | 30.00th=[ 1737], 40.00th=[ 2072], 50.00th=[ 2366], 60.00th=[ 2534],
00:27:50.476 | 70.00th=[ 2601], 80.00th=[ 2735], 90.00th=[ 2903], 95.00th=[ 2970],
00:27:50.476 | 99.00th=[ 3037], 99.50th=[ 3037], 99.90th=[ 3071], 99.95th=[ 3071],
00:27:50.476 | 99.99th=[ 3071]
00:27:50.476 bw ( KiB/s): min=24576, max=92160, per=1.09%, avg=53913.13, stdev=19254.75, samples=15
00:27:50.476 iops : min= 24, max= 90, avg=52.53, stdev=18.73, samples=15
00:27:50.476 lat (msec) : 100=0.19%, 250=2.10%, 500=1.53%, 750=3.25%, 1000=3.82%
00:27:50.476 lat (msec) : 2000=27.53%, >=2000=61.57%
00:27:50.476 cpu : usr=0.01%, sys=1.15%, ctx=922, majf=0, minf=32769
00:27:50.476 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.1%, 32=6.1%, >=64=88.0%
00:27:50.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.476 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:27:50.476 issued rwts: total=523,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.476 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.476 job1: (groupid=0, jobs=1): err= 0: pid=1454446: Sun Jun 9 09:06:11 2024
00:27:50.476 read: IOPS=53, BW=53.1MiB/s (55.7MB/s)(540MiB/10161msec)
00:27:50.476 slat (usec): min=39, max=164857, avg=18585.54, stdev=32883.52
00:27:50.476 clat (msec): min=121, max=3589, avg=2129.20, stdev=611.63
00:27:50.476 lat (msec): min=213, max=3590, avg=2147.78, stdev=609.07
00:27:50.476 clat percentiles (msec):
00:27:50.476 | 1.00th=[ 510], 5.00th=[ 969], 10.00th=[ 1469], 20.00th=[ 1586],
00:27:50.476 | 30.00th=[ 1821], 40.00th=[ 2022], 50.00th=[ 2198], 60.00th=[ 2333],
00:27:50.476 | 70.00th=[ 2467], 80.00th=[ 2567], 90.00th=[ 2903], 95.00th=[ 3104],
00:27:50.476 | 99.00th=[ 3574], 99.50th=[ 3574], 99.90th=[ 3574], 99.95th=[ 3574],
00:27:50.476 | 99.99th=[ 3574]
00:27:50.476 bw ( KiB/s): min=18432, max=102400, per=1.07%, avg=52736.00, stdev=26200.53, samples=16
00:27:50.476 iops : min= 18, max= 100, avg=51.50, stdev=25.59, samples=16
00:27:50.476 lat (msec) : 250=0.56%, 500=0.37%, 750=1.30%, 1000=2.96%, 2000=32.41%
00:27:50.476 lat (msec) : >=2000=62.41%
00:27:50.476 cpu : usr=0.04%, sys=1.23%, ctx=961, majf=0, minf=32769
00:27:50.476 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=3.0%, 32=5.9%, >=64=88.3%
00:27:50.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.476 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:50.476 issued rwts: total=540,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.476 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.476 job1: (groupid=0, jobs=1): err= 0: pid=1454447: Sun Jun 9 09:06:11 2024
00:27:50.476 read: IOPS=32, BW=32.3MiB/s (33.8MB/s)(328MiB/10166msec)
00:27:50.476 slat (usec): min=61, max=233101, avg=30619.39, stdev=40705.68
00:27:50.476 clat (msec): min=120, max=6773, avg=3296.25, stdev=2379.02
00:27:50.476 lat (msec): min=245, max=6793, avg=3326.87, stdev=2389.02
00:27:50.476 clat percentiles (msec):
00:27:50.476 | 1.00th=[ 247], 5.00th=[ 262], 10.00th=[ 542], 20.00th=[ 844],
00:27:50.476 | 30.00th=[ 1099], 40.00th=[ 1938], 50.00th=[ 2400], 60.00th=[ 4329],
00:27:50.476 | 70.00th=[ 5805], 80.00th=[ 6141], 90.00th=[ 6477], 95.00th=[ 6544],
00:27:50.476 | 99.00th=[ 6745], 99.50th=[ 6745], 99.90th=[ 6745], 99.95th=[ 6745],
00:27:50.476 | 99.99th=[ 6745]
00:27:50.476 bw ( KiB/s): min=10240, max=106496, per=0.75%, avg=37236.36, stdev=33786.64, samples=11
00:27:50.476 iops : min= 10, max= 104, avg=36.36, stdev=32.99, samples=11
00:27:50.476 lat (msec) : 250=1.52%, 500=8.23%, 750=9.45%, 1000=9.76%, 2000=13.72%
00:27:50.476 lat (msec) : >=2000=57.32%
00:27:50.476 cpu : usr=0.00%, sys=1.14%, ctx=948, majf=0, minf=32769
00:27:50.476 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.9%, 32=9.8%, >=64=80.8%
00:27:50.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.476 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5%
00:27:50.476 issued rwts: total=328,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.476 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.476 job1: (groupid=0, jobs=1): err= 0: pid=1454448: Sun Jun 9 09:06:11 2024
00:27:50.476 read: IOPS=62, BW=62.5MiB/s (65.6MB/s)(633MiB/10125msec)
00:27:50.476 slat (usec): min=29, max=169713, avg=15806.14, stdev=30752.37
00:27:50.476 clat (msec): min=116, max=3201, avg=1898.62, stdev=682.50
00:27:50.476 lat (msec): min=127, max=3233, avg=1914.43, stdev=683.95
00:27:50.476 clat percentiles (msec):
00:27:50.476 | 1.00th=[ 140], 5.00th=[ 439], 10.00th=[ 1011], 20.00th=[ 1552],
00:27:50.476 | 30.00th=[ 1653], 40.00th=[ 1770], 50.00th=[ 1888], 60.00th=[ 1972],
00:27:50.476 | 70.00th=[ 2198], 80.00th=[ 2433], 90.00th=[ 2836], 95.00th=[ 2937],
00:27:50.476 | 99.00th=[ 3138], 99.50th=[ 3205], 99.90th=[ 3205], 99.95th=[ 3205],
00:27:50.476 | 99.99th=[ 3205]
00:27:50.476 bw ( KiB/s): min= 8192, max=110592, per=1.17%, avg=57561.78, stdev=31364.68, samples=18
00:27:50.476 iops : min= 8, max= 108, avg=56.17, stdev=30.58, samples=18
00:27:50.476 lat (msec) : 250=1.42%, 500=4.58%, 750=2.84%, 1000=1.11%, 2000=52.45%
00:27:50.476 lat (msec) : >=2000=37.60%
00:27:50.476 cpu : usr=0.02%, sys=1.20%, ctx=939, majf=0, minf=32769
00:27:50.477 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.5%, 32=5.1%, >=64=90.0%
00:27:50.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.477 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:50.477 issued rwts: total=633,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.477 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.477 job1: (groupid=0, jobs=1): err= 0: pid=1454449: Sun Jun 9 09:06:11 2024
00:27:50.477 read: IOPS=101, BW=101MiB/s (106MB/s)(1035MiB/10209msec)
00:27:50.477 slat (usec): min=45, max=180769, avg=9719.97, stdev=32850.22
00:27:50.477 clat (msec): min=145, max=1622, avg=1200.84, stdev=199.63
00:27:50.477 lat (msec): min=297, max=1623, avg=1210.56, stdev=200.35
00:27:50.477 clat percentiles (msec):
00:27:50.477 | 1.00th=[ 309], 5.00th=[ 810], 10.00th=[ 1062], 20.00th=[ 1133],
00:27:50.477 | 30.00th=[ 1183], 40.00th=[ 1200], 50.00th=[ 1234], 60.00th=[ 1267],
00:27:50.477 | 70.00th=[ 1284], 80.00th=[ 1318], 90.00th=[ 1401], 95.00th=[ 1435],
00:27:50.477 | 99.00th=[ 1469], 99.50th=[ 1552], 99.90th=[ 1620], 99.95th=[ 1620],
00:27:50.477 | 99.99th=[ 1620]
00:27:50.477 bw ( KiB/s): min=88064, max=129024, per=2.09%, avg=103175.44, stdev=14412.49, samples=18
00:27:50.477 iops : min= 86, max= 126, avg=100.67, stdev=14.14, samples=18
00:27:50.477 lat (msec) : 250=0.10%, 500=2.90%, 750=1.55%, 1000=2.90%, 2000=92.56%
00:27:50.477 cpu : usr=0.04%, sys=1.43%, ctx=950, majf=0, minf=32769
00:27:50.477 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.1%, >=64=93.9%
00:27:50.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.477 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:27:50.477 issued rwts: total=1035,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.477 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.477 job1: (groupid=0, jobs=1): err= 0: pid=1454450: Sun Jun 9 09:06:11 2024
00:27:50.477 read: IOPS=64, BW=64.3MiB/s (67.4MB/s)(649MiB/10099msec)
00:27:50.477 slat (usec): min=39, max=234562, avg=15408.46, stdev=31233.87
00:27:50.477 clat (msec): min=95, max=2654, avg=1849.79, stdev=560.93
00:27:50.477 lat (msec): min=99, max=2725, avg=1865.20, stdev=561.85
00:27:50.477 clat percentiles (msec):
00:27:50.477 | 1.00th=[ 105], 5.00th=[ 498], 10.00th=[ 735], 20.00th=[ 1754],
00:27:50.477 | 30.00th=[ 1838], 40.00th=[ 1921], 50.00th=[ 1989], 60.00th=[ 2089],
00:27:50.477 | 70.00th=[ 2165], 80.00th=[ 2198], 90.00th=[ 2333], 95.00th=[ 2467],
00:27:50.477 | 99.00th=[ 2567], 99.50th=[ 2601], 99.90th=[ 2668], 99.95th=[ 2668],
00:27:50.477 | 99.99th=[ 2668]
00:27:50.477 bw ( KiB/s): min=14336, max=112640, per=1.27%, avg=62878.18, stdev=26552.50, samples=17
00:27:50.477 iops : min= 14, max= 110, avg=61.35, stdev=25.93, samples=17
00:27:50.477 lat (msec) : 100=0.31%, 250=3.08%, 500=1.69%, 750=5.24%, 1000=0.92%
00:27:50.477 lat (msec) : 2000=39.91%, >=2000=48.84%
00:27:50.477 cpu : usr=0.01%, sys=1.17%, ctx=883, majf=0, minf=32769
00:27:50.477 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=4.9%, >=64=90.3%
00:27:50.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.477 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:50.477 issued rwts: total=649,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.477 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.477 job2: (groupid=0, jobs=1): err= 0: pid=1454454: Sun Jun 9 09:06:11 2024
00:27:50.477 read: IOPS=34, BW=34.5MiB/s (36.2MB/s)(349MiB/10104msec)
00:27:50.477 slat (usec): min=72, max=260510, avg=28748.70, stdev=47244.25
00:27:50.477 clat (msec): min=69, max=5819, avg=3242.48, stdev=1756.32
00:27:50.477 lat (msec): min=107, max=5824, avg=3271.23, stdev=1761.09
00:27:50.477 clat percentiles (msec):
00:27:50.477 | 1.00th=[ 144], 5.00th=[ 313], 10.00th=[ 768], 20.00th=[ 1418],
00:27:50.477 | 30.00th=[ 2165], 40.00th=[ 2534], 50.00th=[ 3171], 60.00th=[ 4044],
00:27:50.477 | 70.00th=[ 4732], 80.00th=[ 5269], 90.00th=[ 5537], 95.00th=[ 5671],
00:27:50.477 | 99.00th=[ 5805], 99.50th=[ 5805], 99.90th=[ 5805], 99.95th=[ 5805],
00:27:50.477 | 99.99th=[ 5805]
00:27:50.477 bw ( KiB/s): min=10240, max=67449, per=0.61%, avg=30164.87, stdev=16732.56, samples=15
00:27:50.477 iops : min= 10, max= 65, avg=29.40, stdev=16.20, samples=15
00:27:50.477 lat (msec) : 100=0.29%, 250=4.01%, 500=1.72%, 750=3.72%, 1000=2.58%
00:27:50.477 lat (msec) : 2000=16.05%, >=2000=71.63%
00:27:50.477 cpu : usr=0.01%, sys=0.88%, ctx=910, majf=0, minf=32769
00:27:50.477 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.3%, 16=4.6%, 32=9.2%, >=64=81.9%
00:27:50.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.477 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:27:50.477 issued rwts: total=349,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.477 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.477 job2: (groupid=0, jobs=1): err= 0: pid=1454455: Sun Jun 9 09:06:11 2024
00:27:50.477 read: IOPS=57, BW=57.2MiB/s (59.9MB/s)(579MiB/10128msec)
00:27:50.477 slat (usec): min=34, max=271049, avg=17378.62, stdev=33495.08
00:27:50.477 clat (msec): min=62, max=2748, avg=1884.00, stdev=545.36
00:27:50.477 lat (msec): min=228, max=2748, avg=1901.38, stdev=544.77
00:27:50.477 clat percentiles (msec):
00:27:50.477 | 1.00th=[ 401], 5.00th=[ 693], 10.00th=[ 1351], 20.00th=[ 1569],
00:27:50.477 | 30.00th=[ 1603], 40.00th=[ 1703], 50.00th=[ 1972], 60.00th=[ 2140],
00:27:50.477 | 70.00th=[ 2232], 80.00th=[ 2400], 90.00th=[ 2567], 95.00th=[ 2601],
00:27:50.477 | 99.00th=[ 2702], 99.50th=[ 2735], 99.90th=[ 2735], 99.95th=[ 2735],
00:27:50.477 | 99.99th=[ 2735]
00:27:50.477 bw ( KiB/s): min=20480, max=102400, per=1.25%, avg=61576.53, stdev=25666.81, samples=15
00:27:50.477 iops : min= 20, max= 100, avg=60.13, stdev=25.07, samples=15
00:27:50.477 lat (msec) : 100=0.17%, 250=0.17%, 500=0.86%, 750=5.01%, 1000=3.11%
00:27:50.477 lat (msec) : 2000=41.45%, >=2000=49.22%
00:27:50.477 cpu : usr=0.00%, sys=1.01%, ctx=882, majf=0, minf=32769
00:27:50.477 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.5%, >=64=89.1%
00:27:50.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.477 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:50.477 issued rwts: total=579,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.477 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.477 job2: (groupid=0, jobs=1): err= 0: pid=1454456: Sun Jun 9 09:06:11 2024
00:27:50.477 read: IOPS=60, BW=60.7MiB/s (63.6MB/s)(614MiB/10123msec)
00:27:50.477 slat (usec): min=37, max=253420, avg=16297.55, stdev=34306.53
00:27:50.477 clat (msec): min=113, max=2416, avg=1862.57, stdev=467.56
00:27:50.477 lat (msec): min=242, max=2439, avg=1878.87, stdev=466.30
00:27:50.477 clat percentiles (msec):
00:27:50.477 | 1.00th=[ 249], 5.00th=[ 760], 10.00th=[ 1284], 20.00th=[ 1603],
00:27:50.477 | 30.00th=[ 1720], 40.00th=[ 1838], 50.00th=[ 1989], 60.00th=[ 2106],
00:27:50.477 | 70.00th=[ 2198], 80.00th=[ 2232], 90.00th=[ 2299], 95.00th=[ 2333],
00:27:50.477 | 99.00th=[ 2400], 99.50th=[ 2400], 99.90th=[ 2433], 99.95th=[ 2433],
00:27:50.477 | 99.99th=[ 2433]
00:27:50.477 bw ( KiB/s): min=14307, max=124678, per=1.26%, avg=62318.56, stdev=29065.03, samples=16
00:27:50.477 iops : min= 13, max= 121, avg=60.75, stdev=28.38, samples=16
00:27:50.477 lat (msec) : 250=1.14%, 500=1.47%, 750=1.30%, 1000=2.93%, 2000=44.14%
00:27:50.477 lat (msec) : >=2000=49.02%
00:27:50.477 cpu : usr=0.00%, sys=1.12%, ctx=881, majf=0, minf=32769
00:27:50.477 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.2%, >=64=89.7%
00:27:50.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.477 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:50.477 issued rwts: total=614,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.477 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.477 job2: (groupid=0, jobs=1): err= 0: pid=1454457: Sun Jun 9 09:06:11 2024
00:27:50.477 read: IOPS=59, BW=59.9MiB/s (62.8MB/s)(607MiB/10134msec)
00:27:50.477 slat (usec): min=42, max=193549, avg=16472.85, stdev=34525.69
00:27:50.477 clat (msec): min=132, max=3010, avg=1881.83, stdev=722.45
00:27:50.477 lat (msec): min=142, max=3014, avg=1898.30, stdev=725.18
00:27:50.477 clat percentiles (msec):
00:27:50.477 | 1.00th=[ 288], 5.00th=[ 485], 10.00th=[ 869], 20.00th=[ 1234],
00:27:50.477 | 30.00th=[ 1536], 40.00th=[ 1687], 50.00th=[ 1804], 60.00th=[ 2232],
00:27:50.477 | 70.00th=[ 2400], 80.00th=[ 2567], 90.00th=[ 2802], 95.00th=[ 2903],
00:27:50.477 | 99.00th=[ 2970], 99.50th=[ 3004], 99.90th=[ 3004], 99.95th=[ 3004],
00:27:50.477 | 99.99th=[ 3004]
00:27:50.477 bw ( KiB/s): min=26624, max=124928, per=1.33%, avg=65534.07, stdev=29045.34, samples=15
00:27:50.477 iops : min= 26, max= 122, avg=63.93, stdev=28.42, samples=15
00:27:50.477 lat (msec) : 250=0.99%, 500=4.94%, 750=2.80%, 1000=2.31%, 2000=43.49%
00:27:50.477 lat (msec) : >=2000=45.47%
00:27:50.477 cpu : usr=0.02%, sys=1.07%, ctx=865, majf=0, minf=32769
00:27:50.477 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.3%, >=64=89.6%
00:27:50.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.477 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:50.477 issued rwts: total=607,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.477 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.477 job2: (groupid=0, jobs=1): err= 0: pid=1454458: Sun Jun 9 09:06:11 2024
00:27:50.477 read: IOPS=49, BW=49.8MiB/s (52.2MB/s)(506MiB/10159msec)
00:27:50.477 slat (usec): min=33, max=280601, avg=19840.60, stdev=33563.27
00:27:50.477 clat (msec): min=117, max=2993, avg=2203.70, stdev=556.32
00:27:50.477 lat (msec): min=159, max=3004, avg=2223.54, stdev=553.81
00:27:50.477 clat percentiles (msec):
00:27:50.477 | 1.00th=[ 262], 5.00th=[ 844], 10.00th=[ 1519], 20.00th=[ 1972],
00:27:50.477 | 30.00th=[ 2123], 40.00th=[ 2299], 50.00th=[ 2333], 60.00th=[ 2433],
00:27:50.477 | 70.00th=[ 2467], 80.00th=[ 2567], 90.00th=[ 2702], 95.00th=[ 2836],
00:27:50.477 | 99.00th=[ 2937], 99.50th=[ 2937], 99.90th=[ 3004], 99.95th=[ 3004],
00:27:50.477 | 99.99th=[ 3004]
00:27:50.477 bw ( KiB/s): min=10240, max=81920, per=1.05%, avg=51612.00, stdev=20351.40, samples=15
00:27:50.477 iops : min= 10, max= 80, avg=50.33, stdev=19.99, samples=15
00:27:50.477 lat (msec) : 250=0.99%, 500=2.17%, 750=1.19%, 1000=1.78%, 2000=16.01%
00:27:50.477 lat (msec) : >=2000=77.87%
00:27:50.477 cpu : usr=0.02%, sys=1.01%, ctx=907, majf=0, minf=32769
00:27:50.477 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.3%, >=64=87.5%
00:27:50.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.477 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:27:50.477 issued rwts: total=506,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.477 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.477 job2: (groupid=0, jobs=1): err= 0: pid=1454459: Sun Jun 9 09:06:11 2024
00:27:50.477 read: IOPS=50, BW=50.4MiB/s (52.9MB/s)(511MiB/10136msec)
00:27:50.477 slat (usec): min=30, max=339681, avg=19629.09, stdev=38639.68
00:27:50.477 clat (msec): min=102, max=3398, avg=2177.43, stdev=647.94
00:27:50.478 lat (msec): min=160, max=3404, avg=2197.06, stdev=647.95
00:27:50.478 clat percentiles (msec):
00:27:50.478 | 1.00th=[ 255], 5.00th=[ 852], 10.00th=[ 1267], 20.00th=[ 1754],
00:27:50.478 | 30.00th=[ 2022], 40.00th=[ 2198], 50.00th=[ 2299], 60.00th=[ 2366],
00:27:50.478 | 70.00th=[ 2500], 80.00th=[ 2635], 90.00th=[ 2869], 95.00th=[ 3071],
00:27:50.478 | 99.00th=[ 3272], 99.50th=[ 3406], 99.90th=[ 3406], 99.95th=[ 3406],
00:27:50.478 | 99.99th=[ 3406]
00:27:50.478 bw ( KiB/s): min= 6131, max=94208, per=0.99%, avg=49019.56, stdev=24265.51, samples=16
00:27:50.478 iops : min= 5, max= 92, avg=47.75, stdev=23.87, samples=16
00:27:50.478 lat (msec) : 250=0.78%, 500=2.35%, 750=1.17%, 1000=3.91%, 2000=18.79%
00:27:50.478 lat (msec) : >=2000=72.99%
00:27:50.478 cpu : usr=0.04%, sys=0.90%, ctx=803, majf=0, minf=32769
00:27:50.478 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.3%, >=64=87.7%
00:27:50.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.478 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:27:50.478 issued rwts: total=511,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.478 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.478 job2: (groupid=0, jobs=1): err= 0: pid=1454460: Sun Jun 9 09:06:11 2024
00:27:50.478 read: IOPS=63, BW=63.8MiB/s (66.9MB/s)(647MiB/10146msec)
00:27:50.478 slat (usec): min=27, max=290061, avg=15567.47, stdev=34555.69
00:27:50.478 clat (msec): min=71, max=3151, avg=1847.26, stdev=683.83
00:27:50.478 lat (msec): min=204, max=3195, avg=1862.83, stdev=686.01
00:27:50.478 clat percentiles (msec):
00:27:50.478 | 1.00th=[ 205], 5.00th=[ 489], 10.00th=[ 894], 20.00th=[ 1418],
00:27:50.478 | 30.00th=[ 1586], 40.00th=[ 1770], 50.00th=[ 1821], 60.00th=[ 1905],
00:27:50.478 | 70.00th=[ 2089], 80.00th=[ 2534], 90.00th=[ 2802], 95.00th=[ 2970],
00:27:50.478 | 99.00th=[ 3138], 99.50th=[ 3138], 99.90th=[ 3138], 99.95th=[ 3138],
00:27:50.478 | 99.99th=[ 3138]
00:27:50.478 bw ( KiB/s): min=28672, max=94208, per=1.20%, avg=59034.22, stdev=20843.52, samples=18
00:27:50.478 iops : min= 28, max= 92, avg=57.50, stdev=20.44, samples=18
00:27:50.478 lat (msec) : 100=0.15%, 250=2.32%, 500=3.86%, 750=2.32%, 1000=2.01%
00:27:50.478 lat (msec) : 2000=54.71%, >=2000=34.62%
00:27:50.478 cpu : usr=0.01%, sys=1.20%, ctx=798, majf=0, minf=32769
00:27:50.478 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=4.9%, >=64=90.3%
00:27:50.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.478 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:50.478 issued rwts: total=647,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.478 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.478 job2: (groupid=0, jobs=1): err= 0: pid=1454461: Sun Jun 9 09:06:11 2024
00:27:50.478 read: IOPS=53, BW=53.1MiB/s (55.7MB/s)(540MiB/10165msec)
00:27:50.478 slat (usec): min=32, max=319444, avg=18570.35, stdev=34068.47
00:27:50.478 clat (msec): min=134, max=3430, avg=2127.06, stdev=707.90
00:27:50.478 lat (msec): min=170, max=3455, avg=2145.63, stdev=709.11
00:27:50.478 clat percentiles (msec):
00:27:50.478 | 1.00th=[ 271], 5.00th=[ 575], 10.00th=[ 1070], 20.00th=[ 1821],
00:27:50.478 | 30.00th=[ 1921], 40.00th=[ 2039], 50.00th=[ 2165], 60.00th=[ 2299],
00:27:50.478 | 70.00th=[ 2400], 80.00th=[ 2668], 90.00th=[ 3071], 95.00th=[ 3205],
00:27:50.478 | 99.00th=[ 3373], 99.50th=[ 3406], 99.90th=[ 3440], 99.95th=[ 3440],
00:27:50.478 | 99.99th=[ 3440]
00:27:50.478 bw ( KiB/s): min=14336, max=108544, per=1.14%, avg=56251.73, stdev=23906.98, samples=15
00:27:50.478 iops : min= 14, max= 106, avg=54.93, stdev=23.35, samples=15
00:27:50.478 lat (msec) : 250=0.37%, 500=2.96%, 750=4.26%, 1000=1.85%, 2000=26.67%
00:27:50.478 lat (msec) : >=2000=63.89%
00:27:50.478 cpu : usr=0.01%, sys=1.06%, ctx=840, majf=0, minf=32769
00:27:50.478 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=3.0%, 32=5.9%, >=64=88.3%
00:27:50.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.478 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:50.478 issued rwts: total=540,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.478 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.478 job2: (groupid=0, jobs=1): err= 0: pid=1454462: Sun Jun 9 09:06:11 2024
00:27:50.478 read: IOPS=63, BW=63.3MiB/s (66.3MB/s)(642MiB/10148msec)
00:27:50.478 slat (usec): min=29, max=198857, avg=15628.21, stdev=29276.88
00:27:50.478 clat (msec): min=112, max=2783, avg=1801.40, stdev=539.51
00:27:50.478 lat (msec): min=163, max=2827, avg=1817.03, stdev=539.55
00:27:50.478 clat percentiles (msec):
00:27:50.478 | 1.00th=[ 226], 5.00th=[ 835], 10.00th=[ 1183], 20.00th=[ 1385],
00:27:50.478 | 30.00th=[ 1502], 40.00th=[ 1720], 50.00th=[ 1821], 60.00th=[ 2022],
00:27:50.478 | 70.00th=[ 2165], 80.00th=[ 2265], 90.00th=[ 2366], 95.00th=[ 2567],
00:27:50.478 | 99.00th=[ 2769], 99.50th=[ 2769], 99.90th=[ 2769], 99.95th=[ 2769],
00:27:50.478 | 99.99th=[ 2769]
00:27:50.478 bw ( KiB/s): min=28672, max=122880, per=1.33%, avg=65774.56, stdev=31711.86, samples=16
00:27:50.478 iops : min= 28, max= 120, avg=64.13, stdev=30.96, samples=16
00:27:50.478 lat (msec) : 250=1.40%, 500=2.02%, 750=0.47%, 1000=3.89%, 2000=50.47%
00:27:50.478 lat (msec) : >=2000=41.74%
00:27:50.478 cpu : usr=0.01%, sys=1.14%, ctx=957, majf=0, minf=32769
00:27:50.478 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=5.0%, >=64=90.2%
00:27:50.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.478 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:50.478 issued rwts: total=642,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.478 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.478 job2: (groupid=0, jobs=1): err= 0: pid=1454463: Sun Jun 9 09:06:11 2024
00:27:50.478 read: IOPS=63, BW=63.5MiB/s (66.6MB/s)(642MiB/10110msec)
00:27:50.478 slat (usec): min=42, max=208538, avg=15627.83, stdev=32054.71
00:27:50.478 clat (msec): min=73, max=3229, avg=1860.10, stdev=684.99
00:27:50.478 lat (msec): min=164, max=3231, avg=1875.73, stdev=684.91
00:27:50.478 clat percentiles (msec):
00:27:50.478 | 1.00th=[ 182], 5.00th=[ 592], 10.00th=[ 1200], 20.00th=[ 1368],
00:27:50.478 | 30.00th=[ 1569], 40.00th=[ 1653], 50.00th=[ 1821], 60.00th=[ 2005],
00:27:50.478 | 70.00th=[ 2198], 80.00th=[ 2366], 90.00th=[ 2937], 95.00th=[ 3037],
00:27:50.478 | 99.00th=[ 3205], 99.50th=[ 3205], 99.90th=[ 3239], 99.95th=[ 3239],
00:27:50.478 | 99.99th=[ 3239]
00:27:50.478 bw ( KiB/s): min=18395, max=126976, per=1.26%, avg=61907.88, stdev=28546.56, samples=17
00:27:50.478 iops : min= 17, max= 124, avg=60.29, stdev=28.02, samples=17
00:27:50.478 lat (msec) : 100=0.16%, 250=2.65%, 500=2.02%, 750=1.71%, 1000=2.80%
00:27:50.478 lat (msec) : 2000=50.47%, >=2000=40.19%
00:27:50.478 cpu : usr=0.00%, sys=1.24%, ctx=906, majf=0, minf=32769
00:27:50.478 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=5.0%, >=64=90.2%
00:27:50.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.478 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:50.478 issued rwts: total=642,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.478 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.478 job2: (groupid=0, jobs=1): err= 0: pid=1454464: Sun Jun 9 09:06:11 2024
00:27:50.478 read: IOPS=95, BW=95.8MiB/s (100MB/s)(973MiB/10153msec)
00:27:50.478 slat (usec): min=72, max=143866, avg=10277.58, stdev=20006.95
00:27:50.478 clat (msec): min=144, max=1547, avg=1281.00, stdev=249.92
00:27:50.478 lat (msec): min=213, max=1556, avg=1291.28, stdev=250.83
00:27:50.478 clat percentiles (msec):
00:27:50.478 | 1.00th=[ 245], 5.00th=[ 667], 10.00th=[ 1036], 20.00th=[ 1234],
00:27:50.478 | 30.00th=[ 1284], 40.00th=[ 1318], 50.00th=[ 1334], 60.00th=[ 1368],
00:27:50.478 | 70.00th=[ 1401], 80.00th=[ 1435], 90.00th=[ 1469], 95.00th=[ 1485],
00:27:50.478 | 99.00th=[ 1519], 99.50th=[ 1519], 99.90th=[ 1552], 99.95th=[ 1552],
00:27:50.478 | 99.99th=[ 1552]
00:27:50.479 bw ( KiB/s): min= 8192, max=108544, per=1.85%, avg=91191.37, stdev=21262.95, samples=19
00:27:50.479 iops : min= 8, max= 106, avg=88.95, stdev=20.74, samples=19
00:27:50.479 lat (msec) : 250=1.03%, 500=2.26%, 750=3.08%, 1000=2.67%, 2000=90.96%
00:27:50.479 cpu : usr=0.10%, sys=2.01%, ctx=866, majf=0, minf=32769
00:27:50.479 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.3%, >=64=93.5%
00:27:50.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.479 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:27:50.479 issued rwts: total=973,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.479 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.479 job2: (groupid=0, jobs=1): err= 0: pid=1454465: Sun Jun 9 09:06:11 2024
00:27:50.479 read: IOPS=63, BW=63.9MiB/s (67.0MB/s)(647MiB/10129msec)
00:27:50.479 slat (usec): min=28, max=235380, avg=15479.94, stdev=31497.50
00:27:50.479 clat (msec): min=109, max=2477, avg=1722.48, stdev=535.18
00:27:50.479 lat (msec): min=129, max=2516, avg=1737.96, stdev=536.41
00:27:50.479 clat percentiles (msec):
00:27:50.479 | 1.00th=[ 241], 5.00th=[ 542], 10.00th=[ 919], 20.00th=[ 1418],
00:27:50.479 | 30.00th=[ 1502], 40.00th=[ 1569], 50.00th=[ 1838], 60.00th=[ 2005],
00:27:50.479 | 70.00th=[ 2089], 80.00th=[ 2198], 90.00th=[ 2299], 95.00th=[ 2366],
00:27:50.479 | 99.00th=[ 2467], 99.50th=[ 2467], 99.90th=[ 2467], 99.95th=[ 2467],
00:27:50.479 | 99.99th=[ 2467]
00:27:50.479 bw ( KiB/s): min=32702, max=94208, per=1.35%, avg=66427.88, stdev=19919.35, samples=16
00:27:50.479 iops : min= 31, max= 92, avg=64.81, stdev=19.56, samples=16
00:27:50.479 lat (msec) : 250=1.55%, 500=3.40%, 750=2.63%, 1000=2.94%, 2000=49.15%
00:27:50.479 lat (msec) : >=2000=40.34%
00:27:50.479 cpu : usr=0.00%, sys=1.16%, ctx=872, majf=0, minf=32769
00:27:50.479 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=4.9%, >=64=90.3%
00:27:50.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.479 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:50.479 issued rwts: total=647,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.479 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.479 job2: (groupid=0, jobs=1): err= 0: pid=1454466: Sun Jun 9 09:06:11 2024
00:27:50.479 read: IOPS=45, BW=45.1MiB/s (47.3MB/s)(459MiB/10174msec)
00:27:50.479 slat (usec): min=34, max=278590, avg=21841.81, stdev=34854.03
00:27:50.479 clat (msec): min=146, max=3349, avg=2433.86, stdev=595.36
00:27:50.479 lat (msec): min=312, max=3351, avg=2455.70, stdev=593.41
00:27:50.479 clat percentiles (msec):
00:27:50.479 | 1.00th=[ 542], 5.00th=[ 1284], 10.00th=[ 1536], 20.00th=[ 2056],
00:27:50.479 | 30.00th=[ 2232], 40.00th=[ 2400], 50.00th=[ 2500], 60.00th=[ 2668],
00:27:50.479 | 70.00th=[ 2836], 80.00th=[ 2937], 90.00th=[ 3071], 95.00th=[ 3205],
00:27:50.479 | 99.00th=[ 3306], 99.50th=[ 3339], 99.90th=[ 3339], 99.95th=[ 3339],
00:27:50.479 | 99.99th=[ 3339]
00:27:50.479 bw ( KiB/s): min=16384, max=96256, per=0.92%, avg=45192.53, stdev=22864.17, samples=15
00:27:50.479 iops : min= 16, max= 94, avg=44.13, stdev=22.33, samples=15
00:27:50.479 lat (msec) : 250=0.22%, 500=0.65%, 750=1.74%, 1000=0.65%, 2000=10.68%
00:27:50.479 lat (msec) : >=2000=86.06%
00:27:50.479 cpu : usr=0.01%, sys=1.05%, ctx=912, majf=0, minf=32769
00:27:50.479 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.5%, 32=7.0%, >=64=86.3%
00:27:50.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.479 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:27:50.479 issued rwts: total=459,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.479 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.479 job3: (groupid=0, jobs=1): err= 0: pid=1454480: Sun Jun 9 09:06:11 2024
00:27:50.479 read: IOPS=79, BW=79.1MiB/s (82.9MB/s)(799MiB/10105msec)
00:27:50.479 slat (usec): min=30, max=301829, avg=12515.50, stdev=32072.26
00:27:50.479 clat (msec): min=102, max=2420, avg=1457.23, stdev=333.87
00:27:50.479 lat (msec): min=213, max=2420, avg=1469.75, stdev=332.48
00:27:50.479 clat percentiles (msec):
00:27:50.479 | 1.00th=[ 228], 5.00th=[ 1150], 10.00th=[ 1183], 20.00th=[ 1250],
00:27:50.479 | 30.00th=[ 1267], 40.00th=[ 1301], 50.00th=[ 1351], 60.00th=[ 1435],
00:27:50.479 | 70.00th=[ 1603], 80.00th=[ 1754], 90.00th=[ 1888], 95.00th=[ 2123],
00:27:50.479 | 99.00th=[ 2333], 99.50th=[ 2433], 99.90th=[ 2433], 99.95th=[ 2433],
00:27:50.479 | 99.99th=[ 2433]
00:27:50.479 bw ( KiB/s): min=20480, max=120832, per=1.64%, avg=80915.29, stdev=32392.82, samples=17
00:27:50.479 iops : min= 20, max= 118, avg=78.82, stdev=31.60, samples=17
00:27:50.479 lat (msec) : 250=1.13%, 750=0.75%, 1000=0.63%, 2000=91.36%, >=2000=6.13%
00:27:50.479 cpu : usr=0.02%, sys=1.12%, ctx=910, majf=0, minf=32769
00:27:50.479 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.1%
00:27:50.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.479 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:27:50.479 issued rwts: total=799,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.479 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.479 job3: (groupid=0, jobs=1): err= 0: pid=1454481: Sun Jun 9 09:06:11 2024
00:27:50.479 read: IOPS=66, BW=66.3MiB/s (69.5MB/s)(676MiB/10196msec)
00:27:50.479 slat (usec): min=41, max=197657, avg=14896.14, stdev=28096.93
00:27:50.479 clat (msec): min=122, max=2610, avg=1826.37, stdev=534.96
00:27:50.479 lat (msec): min=198, max=2611, avg=1841.27, stdev=535.80
00:27:50.479 clat percentiles (msec):
00:27:50.479 | 1.00th=[ 418], 5.00th=[ 592], 10.00th=[ 1234], 20.00th=[ 1452],
00:27:50.479 | 30.00th=[ 1586], 40.00th=[ 1703], 50.00th=[ 1955], 60.00th=[ 2072],
00:27:50.479 | 70.00th=[ 2165], 80.00th=[ 2333], 90.00th=[ 2467], 95.00th=[ 2500],
00:27:50.479 | 99.00th=[ 2601], 99.50th=[ 2601], 99.90th=[ 2601], 99.95th=[ 2601],
00:27:50.479 | 99.99th=[ 2601]
00:27:50.479 bw ( KiB/s): min=30720, max=96256, per=1.26%, avg=62350.22, stdev=17522.39, samples=18
00:27:50.479 iops : min= 30, max= 94, avg=60.89, stdev=17.11, samples=18
00:27:50.479 lat (msec) : 250=0.89%, 500=2.51%, 750=3.11%, 1000=1.78%, 2000=44.97%
00:27:50.479 lat (msec) : >=2000=46.75%
00:27:50.479 cpu : usr=0.02%, sys=1.29%, ctx=967, majf=0, minf=32769
00:27:50.479 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.7%, >=64=90.7%
00:27:50.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.479 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:50.479 issued rwts: total=676,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.479 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.479 job3: (groupid=0, jobs=1): err= 0: pid=1454483: Sun Jun 9 09:06:11 2024
00:27:50.479 read: IOPS=50, BW=50.1MiB/s (52.6MB/s)(509MiB/10150msec)
00:27:50.479 slat (usec): min=37, max=169475, avg=19737.23, stdev=33036.88
00:27:50.479 clat (msec): min=101, max=2875, avg=2204.12, stdev=657.24
00:27:50.479 lat (msec): min=161, max=2956, avg=2223.85, stdev=657.37
00:27:50.479 clat percentiles (msec):
00:27:50.479 | 1.00th=[ 186], 5.00th=[ 550], 10.00th=[ 1003], 20.00th=[ 1955],
00:27:50.479 | 30.00th=[ 2232], 40.00th=[ 2333], 50.00th=[ 2467], 60.00th=[ 2534],
00:27:50.479 | 70.00th=[ 2567], 80.00th=[ 2635], 90.00th=[ 2735], 95.00th=[ 2802],
00:27:50.479 | 99.00th=[ 2836], 99.50th=[ 2836], 99.90th=[ 2869], 99.95th=[ 2869],
00:27:50.479 | 99.99th=[ 2869]
00:27:50.479 bw ( KiB/s): min=40878, max=100352, per=1.05%, avg=52013.73, stdev=14625.99, samples=15
00:27:50.479 iops : min= 39, max= 98, avg=50.73, stdev=14.34, samples=15
00:27:50.479 lat (msec) : 250=2.75%, 500=1.77%, 750=2.36%, 1000=2.75%, 2000=10.41% 00:27:50.479 lat (msec) : >=2000=79.96% 00:27:50.479 cpu : usr=0.03%, sys=1.02%, ctx=888, majf=0, minf=32769 00:27:50.479 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.3%, >=64=87.6% 00:27:50.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.479 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:27:50.479 issued rwts: total=509,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.479 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:50.479 job3: (groupid=0, jobs=1): err= 0: pid=1454484: Sun Jun 9 09:06:11 2024 00:27:50.479 read: IOPS=79, BW=79.4MiB/s (83.3MB/s)(808MiB/10174msec) 00:27:50.479 slat (usec): min=52, max=72587, avg=12402.46, stdev=13667.37 00:27:50.479 clat (msec): min=145, max=2224, avg=1496.55, stdev=386.44 00:27:50.479 lat (msec): min=199, max=2250, avg=1508.96, stdev=387.51 00:27:50.479 clat percentiles (msec): 00:27:50.479 | 1.00th=[ 284], 5.00th=[ 634], 10.00th=[ 1028], 20.00th=[ 1284], 00:27:50.479 | 30.00th=[ 1401], 40.00th=[ 1469], 50.00th=[ 1519], 60.00th=[ 1586], 00:27:50.479 | 70.00th=[ 1653], 80.00th=[ 1770], 90.00th=[ 1938], 95.00th=[ 2056], 00:27:50.479 | 99.00th=[ 2198], 99.50th=[ 2198], 99.90th=[ 2232], 99.95th=[ 2232], 00:27:50.479 | 99.99th=[ 2232] 00:27:50.479 bw ( KiB/s): min=45056, max=102400, per=1.57%, avg=77368.89, stdev=19436.09, samples=18 00:27:50.479 iops : min= 44, max= 100, avg=75.56, stdev=18.98, samples=18 00:27:50.479 lat (msec) : 250=0.62%, 500=3.34%, 750=2.48%, 1000=3.47%, 2000=83.91% 00:27:50.479 lat (msec) : >=2000=6.19% 00:27:50.479 cpu : usr=0.05%, sys=1.67%, ctx=994, majf=0, minf=32769 00:27:50.479 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.2% 00:27:50.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.479 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:50.479 issued rwts: total=808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.479 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:50.479 job3: (groupid=0, jobs=1): err= 0: pid=1454485: Sun Jun 9 09:06:11 2024 00:27:50.479 read: IOPS=65, BW=65.7MiB/s (68.9MB/s)(667MiB/10145msec) 00:27:50.479 slat (usec): min=49, max=159088, avg=14994.26, stdev=22313.80 00:27:50.479 clat (msec): min=138, max=2507, avg=1805.73, stdev=463.07 00:27:50.479 lat (msec): min=235, max=2516, avg=1820.73, stdev=462.68 00:27:50.479 clat percentiles (msec): 00:27:50.479 | 1.00th=[ 368], 5.00th=[ 877], 10.00th=[ 1217], 20.00th=[ 1401], 00:27:50.479 | 30.00th=[ 1586], 40.00th=[ 1854], 50.00th=[ 1972], 60.00th=[ 2039], 00:27:50.479 | 70.00th=[ 2123], 80.00th=[ 2165], 90.00th=[ 2232], 95.00th=[ 2366], 00:27:50.479 | 99.00th=[ 2500], 99.50th=[ 2500], 99.90th=[ 2500], 99.95th=[ 2500], 00:27:50.479 | 99.99th=[ 2500] 00:27:50.479 bw ( KiB/s): min= 8192, max=108544, per=1.25%, avg=61434.50, stdev=25660.79, samples=18 00:27:50.480 iops : min= 8, max= 106, avg=59.89, stdev=25.02, samples=18 00:27:50.480 lat (msec) : 250=0.30%, 500=1.50%, 750=2.55%, 1000=2.25%, 2000=48.13% 00:27:50.480 lat (msec) : >=2000=45.28% 00:27:50.480 cpu : usr=0.05%, sys=1.59%, ctx=919, majf=0, minf=32769 00:27:50.480 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.6% 00:27:50.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.480 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:50.480 issued rwts: 
total=667,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.480 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:50.480 job3: (groupid=0, jobs=1): err= 0: pid=1454486: Sun Jun 9 09:06:11 2024 00:27:50.480 read: IOPS=56, BW=56.6MiB/s (59.3MB/s)(575MiB/10160msec) 00:27:50.480 slat (usec): min=40, max=171264, avg=17389.42, stdev=25551.14 00:27:50.480 clat (msec): min=157, max=3258, avg=2021.89, stdev=803.57 00:27:50.480 lat (msec): min=159, max=3275, avg=2039.28, stdev=806.94 00:27:50.480 clat percentiles (msec): 00:27:50.480 | 1.00th=[ 163], 5.00th=[ 485], 10.00th=[ 667], 20.00th=[ 1401], 00:27:50.480 | 30.00th=[ 1838], 40.00th=[ 1938], 50.00th=[ 2056], 60.00th=[ 2232], 00:27:50.480 | 70.00th=[ 2500], 80.00th=[ 2836], 90.00th=[ 3037], 95.00th=[ 3138], 00:27:50.480 | 99.00th=[ 3239], 99.50th=[ 3239], 99.90th=[ 3272], 99.95th=[ 3272], 00:27:50.480 | 99.99th=[ 3272] 00:27:50.480 bw ( KiB/s): min=10240, max=102400, per=1.16%, avg=57328.62, stdev=25537.82, samples=16 00:27:50.480 iops : min= 10, max= 100, avg=55.87, stdev=24.93, samples=16 00:27:50.480 lat (msec) : 250=1.39%, 500=5.04%, 750=5.04%, 1000=1.39%, 2000=32.87% 00:27:50.480 lat (msec) : >=2000=54.26% 00:27:50.480 cpu : usr=0.06%, sys=1.34%, ctx=954, majf=0, minf=32053 00:27:50.480 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.6%, >=64=89.0% 00:27:50.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.480 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:50.480 issued rwts: total=575,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.480 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:50.480 job3: (groupid=0, jobs=1): err= 0: pid=1454487: Sun Jun 9 09:06:11 2024 00:27:50.480 read: IOPS=84, BW=84.4MiB/s (88.5MB/s)(861MiB/10199msec) 00:27:50.480 slat (usec): min=33, max=171050, avg=11682.04, stdev=33137.46 00:27:50.480 clat (msec): min=137, max=1925, avg=1415.18, stdev=279.89 00:27:50.480 lat (msec): min=253, max=1931, avg=1426.86, stdev=279.02 00:27:50.480 clat percentiles (msec): 00:27:50.480 | 1.00th=[ 264], 5.00th=[ 927], 10.00th=[ 1234], 20.00th=[ 1301], 00:27:50.480 | 30.00th=[ 1368], 40.00th=[ 1418], 50.00th=[ 1435], 60.00th=[ 1469], 00:27:50.480 | 70.00th=[ 1536], 80.00th=[ 1620], 90.00th=[ 1703], 95.00th=[ 1804], 00:27:50.480 | 99.00th=[ 1888], 99.50th=[ 1905], 99.90th=[ 1921], 99.95th=[ 1921], 00:27:50.480 | 99.99th=[ 1921] 00:27:50.480 bw ( KiB/s): min=51200, max=122880, per=1.79%, avg=88304.94, stdev=15493.94, samples=17 00:27:50.480 iops : min= 50, max= 120, avg=86.24, stdev=15.13, samples=17 00:27:50.480 lat (msec) : 250=0.12%, 500=2.56%, 750=1.39%, 1000=2.32%, 2000=93.61% 00:27:50.480 cpu : usr=0.03%, sys=1.36%, ctx=868, majf=0, minf=32769 00:27:50.480 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.7%, >=64=92.7% 00:27:50.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.480 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:50.480 issued rwts: total=861,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.480 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:50.480 job3: (groupid=0, jobs=1): err= 0: pid=1454488: Sun Jun 9 09:06:11 2024 00:27:50.480 read: IOPS=76, BW=76.1MiB/s (79.8MB/s)(771MiB/10137msec) 00:27:50.480 slat (usec): min=43, max=170307, avg=13045.87, stdev=26507.23 00:27:50.480 clat (msec): min=74, max=2346, avg=1501.73, stdev=517.41 00:27:50.480 lat (msec): min=145, max=2348, avg=1514.78, stdev=519.54 00:27:50.480 clat percentiles 
(msec): 00:27:50.480 | 1.00th=[ 153], 5.00th=[ 439], 10.00th=[ 785], 20.00th=[ 1183], 00:27:50.480 | 30.00th=[ 1250], 40.00th=[ 1334], 50.00th=[ 1519], 60.00th=[ 1720], 00:27:50.480 | 70.00th=[ 1804], 80.00th=[ 1955], 90.00th=[ 2232], 95.00th=[ 2265], 00:27:50.480 | 99.00th=[ 2299], 99.50th=[ 2333], 99.90th=[ 2333], 99.95th=[ 2333], 00:27:50.480 | 99.99th=[ 2333] 00:27:50.480 bw ( KiB/s): min=30720, max=124928, per=1.67%, avg=82279.12, stdev=26643.19, samples=16 00:27:50.480 iops : min= 30, max= 122, avg=80.19, stdev=26.13, samples=16 00:27:50.480 lat (msec) : 100=0.13%, 250=1.95%, 500=4.02%, 750=3.76%, 1000=2.33% 00:27:50.480 lat (msec) : 2000=70.04%, >=2000=17.77% 00:27:50.480 cpu : usr=0.04%, sys=1.34%, ctx=876, majf=0, minf=32769 00:27:50.480 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.2%, >=64=91.8% 00:27:50.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.480 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:50.480 issued rwts: total=771,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.480 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:50.480 job3: (groupid=0, jobs=1): err= 0: pid=1454489: Sun Jun 9 09:06:11 2024 00:27:50.480 read: IOPS=84, BW=84.9MiB/s (89.0MB/s)(863MiB/10163msec) 00:27:50.480 slat (usec): min=50, max=134736, avg=11601.09, stdev=26700.35 00:27:50.480 clat (msec): min=146, max=2209, avg=1424.49, stdev=370.98 00:27:50.480 lat (msec): min=245, max=2221, avg=1436.09, stdev=372.24 00:27:50.480 clat percentiles (msec): 00:27:50.480 | 1.00th=[ 279], 5.00th=[ 634], 10.00th=[ 1099], 20.00th=[ 1250], 00:27:50.480 | 30.00th=[ 1301], 40.00th=[ 1334], 50.00th=[ 1401], 60.00th=[ 1485], 00:27:50.480 | 70.00th=[ 1552], 80.00th=[ 1670], 90.00th=[ 1955], 95.00th=[ 2072], 00:27:50.480 | 99.00th=[ 2106], 99.50th=[ 2123], 99.90th=[ 2198], 99.95th=[ 2198], 00:27:50.480 | 99.99th=[ 2198] 00:27:50.480 bw ( KiB/s): min=40960, max=116736, per=1.69%, avg=83594.94, stdev=21795.75, samples=18 00:27:50.480 iops : min= 40, max= 114, avg=81.50, stdev=21.21, samples=18 00:27:50.480 lat (msec) : 250=0.23%, 500=3.59%, 750=1.85%, 1000=3.48%, 2000=83.78% 00:27:50.480 lat (msec) : >=2000=7.07% 00:27:50.480 cpu : usr=0.01%, sys=1.47%, ctx=929, majf=0, minf=32769 00:27:50.480 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.7%, >=64=92.7% 00:27:50.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.480 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:50.480 issued rwts: total=863,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.480 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:50.480 job3: (groupid=0, jobs=1): err= 0: pid=1454490: Sun Jun 9 09:06:11 2024 00:27:50.480 read: IOPS=72, BW=72.2MiB/s (75.7MB/s)(733MiB/10148msec) 00:27:50.480 slat (usec): min=35, max=148322, avg=13641.09, stdev=29483.04 00:27:50.480 clat (msec): min=145, max=2383, avg=1591.55, stdev=505.68 00:27:50.480 lat (msec): min=147, max=2389, avg=1605.19, stdev=507.06 00:27:50.480 clat percentiles (msec): 00:27:50.480 | 1.00th=[ 155], 5.00th=[ 447], 10.00th=[ 743], 20.00th=[ 1267], 00:27:50.480 | 30.00th=[ 1351], 40.00th=[ 1603], 50.00th=[ 1754], 60.00th=[ 1838], 00:27:50.480 | 70.00th=[ 1921], 80.00th=[ 1989], 90.00th=[ 2106], 95.00th=[ 2198], 00:27:50.480 | 99.00th=[ 2366], 99.50th=[ 2366], 99.90th=[ 2400], 99.95th=[ 2400], 00:27:50.480 | 99.99th=[ 2400] 00:27:50.480 bw ( KiB/s): min=49152, max=130810, per=1.57%, avg=77511.56, stdev=24450.79, samples=16 
00:27:50.480 iops : min= 48, max= 127, avg=75.44, stdev=23.75, samples=16 00:27:50.480 lat (msec) : 250=1.77%, 500=4.23%, 750=4.23%, 1000=2.32%, 2000=70.94% 00:27:50.480 lat (msec) : >=2000=16.51% 00:27:50.480 cpu : usr=0.01%, sys=1.28%, ctx=957, majf=0, minf=32769 00:27:50.480 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.4% 00:27:50.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.480 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:50.480 issued rwts: total=733,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.480 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:50.480 job3: (groupid=0, jobs=1): err= 0: pid=1454491: Sun Jun 9 09:06:11 2024 00:27:50.480 read: IOPS=66, BW=66.2MiB/s (69.4MB/s)(670MiB/10125msec) 00:27:50.480 slat (usec): min=35, max=227515, avg=14934.24, stdev=30425.69 00:27:50.480 clat (msec): min=116, max=2450, avg=1757.49, stdev=478.05 00:27:50.480 lat (msec): min=154, max=2455, avg=1772.42, stdev=478.84 00:27:50.481 clat percentiles (msec): 00:27:50.481 | 1.00th=[ 259], 5.00th=[ 535], 10.00th=[ 1150], 20.00th=[ 1552], 00:27:50.481 | 30.00th=[ 1653], 40.00th=[ 1737], 50.00th=[ 1854], 60.00th=[ 1938], 00:27:50.481 | 70.00th=[ 2039], 80.00th=[ 2123], 90.00th=[ 2232], 95.00th=[ 2333], 00:27:50.481 | 99.00th=[ 2400], 99.50th=[ 2433], 99.90th=[ 2467], 99.95th=[ 2467], 00:27:50.481 | 99.99th=[ 2467] 00:27:50.481 bw ( KiB/s): min= 2048, max=102400, per=1.33%, avg=65401.76, stdev=24111.43, samples=17 00:27:50.481 iops : min= 2, max= 100, avg=63.76, stdev=23.58, samples=17 00:27:50.481 lat (msec) : 250=0.90%, 500=4.03%, 750=1.34%, 1000=1.94%, 2000=58.81% 00:27:50.481 lat (msec) : >=2000=32.99% 00:27:50.481 cpu : usr=0.04%, sys=1.06%, ctx=866, majf=0, minf=32769 00:27:50.481 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.6% 00:27:50.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.481 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:50.481 issued rwts: total=670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.481 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:50.481 job3: (groupid=0, jobs=1): err= 0: pid=1454492: Sun Jun 9 09:06:11 2024 00:27:50.481 read: IOPS=77, BW=78.0MiB/s (81.8MB/s)(786MiB/10079msec) 00:27:50.481 slat (usec): min=28, max=179992, avg=12778.16, stdev=30261.29 00:27:50.481 clat (msec): min=31, max=2267, avg=1453.02, stdev=411.67 00:27:50.481 lat (msec): min=82, max=2270, avg=1465.80, stdev=412.12 00:27:50.481 clat percentiles (msec): 00:27:50.481 | 1.00th=[ 83], 5.00th=[ 485], 10.00th=[ 1070], 20.00th=[ 1284], 00:27:50.481 | 30.00th=[ 1385], 40.00th=[ 1418], 50.00th=[ 1485], 60.00th=[ 1552], 00:27:50.481 | 70.00th=[ 1603], 80.00th=[ 1720], 90.00th=[ 1938], 95.00th=[ 2089], 00:27:50.481 | 99.00th=[ 2165], 99.50th=[ 2265], 99.90th=[ 2265], 99.95th=[ 2265], 00:27:50.481 | 99.99th=[ 2265] 00:27:50.481 bw ( KiB/s): min=30720, max=122634, per=1.61%, avg=79255.18, stdev=23598.85, samples=17 00:27:50.481 iops : min= 30, max= 119, avg=77.35, stdev=22.96, samples=17 00:27:50.481 lat (msec) : 50=0.13%, 100=1.53%, 250=0.64%, 500=3.18%, 750=2.04% 00:27:50.481 lat (msec) : 1000=2.42%, 2000=83.33%, >=2000=6.74% 00:27:50.481 cpu : usr=0.02%, sys=1.24%, ctx=905, majf=0, minf=32769 00:27:50.481 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.1%, >=64=92.0% 00:27:50.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.481 complete : 
0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:50.481 issued rwts: total=786,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.481 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:50.481 job3: (groupid=0, jobs=1): err= 0: pid=1454493: Sun Jun 9 09:06:11 2024 00:27:50.481 read: IOPS=59, BW=60.0MiB/s (62.9MB/s)(607MiB/10118msec) 00:27:50.481 slat (usec): min=31, max=197082, avg=16508.54, stdev=33849.94 00:27:50.481 clat (msec): min=94, max=3446, avg=1958.37, stdev=799.30 00:27:50.481 lat (msec): min=124, max=3474, avg=1974.88, stdev=801.76 00:27:50.481 clat percentiles (msec): 00:27:50.481 | 1.00th=[ 140], 5.00th=[ 384], 10.00th=[ 944], 20.00th=[ 1502], 00:27:50.481 | 30.00th=[ 1636], 40.00th=[ 1770], 50.00th=[ 1888], 60.00th=[ 2005], 00:27:50.481 | 70.00th=[ 2198], 80.00th=[ 2869], 90.00th=[ 3171], 95.00th=[ 3306], 00:27:50.481 | 99.00th=[ 3373], 99.50th=[ 3406], 99.90th=[ 3440], 99.95th=[ 3440], 00:27:50.481 | 99.99th=[ 3440] 00:27:50.481 bw ( KiB/s): min= 2048, max=100553, per=1.11%, avg=54517.22, stdev=30101.67, samples=18 00:27:50.481 iops : min= 2, max= 98, avg=53.11, stdev=29.31, samples=18 00:27:50.481 lat (msec) : 100=0.16%, 250=2.47%, 500=3.46%, 750=2.47%, 1000=1.81% 00:27:50.481 lat (msec) : 2000=49.42%, >=2000=40.20% 00:27:50.481 cpu : usr=0.03%, sys=0.97%, ctx=872, majf=0, minf=32769 00:27:50.481 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.3%, >=64=89.6% 00:27:50.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.481 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:50.481 issued rwts: total=607,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.481 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:50.481 job4: (groupid=0, jobs=1): err= 0: pid=1454503: Sun Jun 9 09:06:11 2024 00:27:50.481 read: IOPS=70, BW=70.7MiB/s (74.2MB/s)(718MiB/10151msec) 00:27:50.481 slat (usec): min=41, max=134129, avg=13936.22, stdev=23288.22 00:27:50.481 clat (msec): min=141, max=2479, avg=1735.41, stdev=504.12 00:27:50.481 lat (msec): min=175, max=2493, avg=1749.34, stdev=505.65 00:27:50.481 clat percentiles (msec): 00:27:50.481 | 1.00th=[ 284], 5.00th=[ 944], 10.00th=[ 1133], 20.00th=[ 1234], 00:27:50.481 | 30.00th=[ 1334], 40.00th=[ 1720], 50.00th=[ 1854], 60.00th=[ 1938], 00:27:50.481 | 70.00th=[ 2039], 80.00th=[ 2232], 90.00th=[ 2333], 95.00th=[ 2400], 00:27:50.481 | 99.00th=[ 2467], 99.50th=[ 2467], 99.90th=[ 2467], 99.95th=[ 2467], 00:27:50.481 | 99.99th=[ 2467] 00:27:50.481 bw ( KiB/s): min=20480, max=106283, per=1.29%, avg=63685.05, stdev=23693.40, samples=19 00:27:50.481 iops : min= 20, max= 103, avg=62.05, stdev=23.10, samples=19 00:27:50.481 lat (msec) : 250=0.42%, 500=2.37%, 750=0.97%, 1000=2.92%, 2000=60.72% 00:27:50.481 lat (msec) : >=2000=32.59% 00:27:50.481 cpu : usr=0.03%, sys=1.53%, ctx=994, majf=0, minf=32769 00:27:50.481 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.5%, >=64=91.2% 00:27:50.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.481 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:50.481 issued rwts: total=718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.481 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:50.481 job4: (groupid=0, jobs=1): err= 0: pid=1454504: Sun Jun 9 09:06:11 2024 00:27:50.481 read: IOPS=56, BW=56.9MiB/s (59.7MB/s)(577MiB/10142msec) 00:27:50.481 slat (usec): min=43, max=96828, avg=17360.30, stdev=18467.70 00:27:50.481 clat (msec): min=121, 
max=2699, avg=2026.65, stdev=517.75 00:27:50.481 lat (msec): min=146, max=2701, avg=2044.01, stdev=517.08 00:27:50.481 clat percentiles (msec): 00:27:50.481 | 1.00th=[ 226], 5.00th=[ 625], 10.00th=[ 1267], 20.00th=[ 1938], 00:27:50.481 | 30.00th=[ 1989], 40.00th=[ 2056], 50.00th=[ 2140], 60.00th=[ 2198], 00:27:50.481 | 70.00th=[ 2299], 80.00th=[ 2366], 90.00th=[ 2467], 95.00th=[ 2567], 00:27:50.481 | 99.00th=[ 2635], 99.50th=[ 2668], 99.90th=[ 2702], 99.95th=[ 2702], 00:27:50.481 | 99.99th=[ 2702] 00:27:50.481 bw ( KiB/s): min= 4087, max=88064, per=1.10%, avg=54079.18, stdev=17262.58, samples=17 00:27:50.481 iops : min= 3, max= 86, avg=52.65, stdev=17.07, samples=17 00:27:50.481 lat (msec) : 250=1.04%, 500=2.25%, 750=2.43%, 1000=2.08%, 2000=23.40% 00:27:50.481 lat (msec) : >=2000=68.80% 00:27:50.481 cpu : usr=0.07%, sys=1.16%, ctx=979, majf=0, minf=32769 00:27:50.481 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.5%, >=64=89.1% 00:27:50.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.481 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:50.481 issued rwts: total=577,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.481 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:50.481 job4: (groupid=0, jobs=1): err= 0: pid=1454505: Sun Jun 9 09:06:11 2024 00:27:50.481 read: IOPS=67, BW=67.7MiB/s (71.0MB/s)(687MiB/10146msec) 00:27:50.481 slat (usec): min=28, max=155028, avg=14558.70, stdev=28348.12 00:27:50.481 clat (msec): min=140, max=2808, avg=1808.09, stdev=563.85 00:27:50.481 lat (msec): min=170, max=2862, avg=1822.65, stdev=565.53 00:27:50.481 clat percentiles (msec): 00:27:50.481 | 1.00th=[ 279], 5.00th=[ 659], 10.00th=[ 1083], 20.00th=[ 1368], 00:27:50.481 | 30.00th=[ 1603], 40.00th=[ 1703], 50.00th=[ 1787], 60.00th=[ 1972], 00:27:50.481 | 70.00th=[ 2123], 80.00th=[ 2299], 90.00th=[ 2567], 95.00th=[ 2702], 00:27:50.481 | 99.00th=[ 2769], 99.50th=[ 2802], 99.90th=[ 2802], 99.95th=[ 2802], 00:27:50.481 | 99.99th=[ 2802] 00:27:50.481 bw ( KiB/s): min=30720, max=108544, per=1.29%, avg=63716.44, stdev=21502.20, samples=18 00:27:50.481 iops : min= 30, max= 106, avg=62.11, stdev=21.06, samples=18 00:27:50.481 lat (msec) : 250=0.44%, 500=2.77%, 750=1.89%, 1000=2.62%, 2000=54.59% 00:27:50.481 lat (msec) : >=2000=37.70% 00:27:50.481 cpu : usr=0.04%, sys=1.25%, ctx=957, majf=0, minf=32769 00:27:50.481 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.7%, >=64=90.8% 00:27:50.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.481 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:50.481 issued rwts: total=687,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.481 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:50.481 job4: (groupid=0, jobs=1): err= 0: pid=1454506: Sun Jun 9 09:06:11 2024 00:27:50.481 read: IOPS=54, BW=54.4MiB/s (57.0MB/s)(550MiB/10109msec) 00:27:50.481 slat (usec): min=30, max=190591, avg=18263.73, stdev=35871.01 00:27:50.481 clat (msec): min=61, max=3320, avg=2020.04, stdev=806.78 00:27:50.481 lat (msec): min=190, max=3334, avg=2038.30, stdev=808.13 00:27:50.481 clat percentiles (msec): 00:27:50.481 | 1.00th=[ 194], 5.00th=[ 347], 10.00th=[ 953], 20.00th=[ 1385], 00:27:50.481 | 30.00th=[ 1452], 40.00th=[ 1888], 50.00th=[ 2198], 60.00th=[ 2299], 00:27:50.481 | 70.00th=[ 2534], 80.00th=[ 2735], 90.00th=[ 3037], 95.00th=[ 3205], 00:27:50.481 | 99.00th=[ 3272], 99.50th=[ 3306], 99.90th=[ 3306], 99.95th=[ 3306], 00:27:50.481 
| 99.99th=[ 3306] 00:27:50.481 bw ( KiB/s): min= 4087, max=96256, per=1.10%, avg=54015.44, stdev=29847.28, samples=16 00:27:50.481 iops : min= 3, max= 94, avg=52.69, stdev=29.26, samples=16 00:27:50.481 lat (msec) : 100=0.18%, 250=2.73%, 500=3.45%, 750=2.18%, 1000=3.82% 00:27:50.481 lat (msec) : 2000=29.09%, >=2000=58.55% 00:27:50.481 cpu : usr=0.00%, sys=1.12%, ctx=922, majf=0, minf=32769 00:27:50.481 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=2.9%, 32=5.8%, >=64=88.5% 00:27:50.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.481 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:50.481 issued rwts: total=550,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.481 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:50.481 job4: (groupid=0, jobs=1): err= 0: pid=1454507: Sun Jun 9 09:06:11 2024 00:27:50.481 read: IOPS=59, BW=59.7MiB/s (62.6MB/s)(607MiB/10162msec) 00:27:50.481 slat (usec): min=31, max=179673, avg=16494.01, stdev=27511.22 00:27:50.481 clat (msec): min=147, max=2943, avg=1909.41, stdev=561.06 00:27:50.481 lat (msec): min=165, max=2943, avg=1925.91, stdev=561.83 00:27:50.481 clat percentiles (msec): 00:27:50.481 | 1.00th=[ 288], 5.00th=[ 726], 10.00th=[ 1183], 20.00th=[ 1519], 00:27:50.481 | 30.00th=[ 1636], 40.00th=[ 1770], 50.00th=[ 2005], 60.00th=[ 2140], 00:27:50.481 | 70.00th=[ 2299], 80.00th=[ 2400], 90.00th=[ 2567], 95.00th=[ 2635], 00:27:50.481 | 99.00th=[ 2769], 99.50th=[ 2836], 99.90th=[ 2937], 99.95th=[ 2937], 00:27:50.481 | 99.99th=[ 2937] 00:27:50.481 bw ( KiB/s): min=26624, max=94208, per=1.24%, avg=61314.06, stdev=20468.36, samples=16 00:27:50.481 iops : min= 26, max= 92, avg=59.81, stdev=19.97, samples=16 00:27:50.482 lat (msec) : 250=0.82%, 500=1.81%, 750=2.64%, 1000=2.64%, 2000=42.01% 00:27:50.482 lat (msec) : >=2000=50.08% 00:27:50.482 cpu : usr=0.04%, sys=1.16%, ctx=949, majf=0, minf=32769 00:27:50.482 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.3%, >=64=89.6% 00:27:50.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.482 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:50.482 issued rwts: total=607,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.482 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:50.482 job4: (groupid=0, jobs=1): err= 0: pid=1454508: Sun Jun 9 09:06:11 2024 00:27:50.482 read: IOPS=52, BW=52.2MiB/s (54.8MB/s)(529MiB/10127msec) 00:27:50.482 slat (usec): min=43, max=239944, avg=18906.82, stdev=26590.27 00:27:50.482 clat (msec): min=121, max=3577, avg=2130.64, stdev=872.33 00:27:50.482 lat (msec): min=165, max=3593, avg=2149.55, stdev=873.88 00:27:50.482 clat percentiles (msec): 00:27:50.482 | 1.00th=[ 199], 5.00th=[ 609], 10.00th=[ 1099], 20.00th=[ 1284], 00:27:50.482 | 30.00th=[ 1452], 40.00th=[ 1888], 50.00th=[ 2265], 60.00th=[ 2467], 00:27:50.482 | 70.00th=[ 2802], 80.00th=[ 2903], 90.00th=[ 3306], 95.00th=[ 3406], 00:27:50.482 | 99.00th=[ 3574], 99.50th=[ 3574], 99.90th=[ 3574], 99.95th=[ 3574], 00:27:50.482 | 99.99th=[ 3574] 00:27:50.482 bw ( KiB/s): min=16384, max=112640, per=1.11%, avg=54856.60, stdev=29641.36, samples=15 00:27:50.482 iops : min= 16, max= 110, avg=53.40, stdev=28.84, samples=15 00:27:50.482 lat (msec) : 250=2.84%, 500=1.51%, 750=2.46%, 1000=0.76%, 2000=35.73% 00:27:50.482 lat (msec) : >=2000=56.71% 00:27:50.482 cpu : usr=0.02%, sys=1.31%, ctx=984, majf=0, minf=32769 00:27:50.482 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.0%, >=64=88.1% 
00:27:50.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.482 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:50.482 issued rwts: total=529,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.482 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:50.482 job4: (groupid=0, jobs=1): err= 0: pid=1454509: Sun Jun 9 09:06:11 2024 00:27:50.482 read: IOPS=61, BW=61.2MiB/s (64.2MB/s)(619MiB/10113msec) 00:27:50.482 slat (usec): min=31, max=244813, avg=16154.72, stdev=31368.22 00:27:50.482 clat (msec): min=110, max=2528, avg=1774.74, stdev=486.95 00:27:50.482 lat (msec): min=113, max=2609, avg=1790.89, stdev=486.27 00:27:50.482 clat percentiles (msec): 00:27:50.482 | 1.00th=[ 127], 5.00th=[ 718], 10.00th=[ 1267], 20.00th=[ 1519], 00:27:50.482 | 30.00th=[ 1569], 40.00th=[ 1754], 50.00th=[ 1821], 60.00th=[ 1905], 00:27:50.482 | 70.00th=[ 2022], 80.00th=[ 2165], 90.00th=[ 2366], 95.00th=[ 2467], 00:27:50.482 | 99.00th=[ 2534], 99.50th=[ 2534], 99.90th=[ 2534], 99.95th=[ 2534], 00:27:50.482 | 99.99th=[ 2534] 00:27:50.482 bw ( KiB/s): min= 6131, max=96256, per=1.28%, avg=62967.75, stdev=26299.38, samples=16 00:27:50.482 iops : min= 5, max= 94, avg=61.37, stdev=25.84, samples=16 00:27:50.482 lat (msec) : 250=1.94%, 500=2.42%, 750=0.81%, 1000=1.45%, 2000=62.52% 00:27:50.482 lat (msec) : >=2000=30.86% 00:27:50.482 cpu : usr=0.03%, sys=1.09%, ctx=874, majf=0, minf=32769 00:27:50.482 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.2%, >=64=89.8% 00:27:50.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.482 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:50.482 issued rwts: total=619,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.482 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:50.482 job4: (groupid=0, jobs=1): err= 0: pid=1454510: Sun Jun 9 09:06:11 2024 00:27:50.482 read: IOPS=57, BW=57.6MiB/s (60.4MB/s)(585MiB/10150msec) 00:27:50.482 slat (usec): min=46, max=167100, avg=17092.16, stdev=28244.53 00:27:50.482 clat (msec): min=148, max=3022, avg=1963.69, stdev=541.60 00:27:50.482 lat (msec): min=207, max=3024, avg=1980.78, stdev=539.91 00:27:50.482 clat percentiles (msec): 00:27:50.482 | 1.00th=[ 309], 5.00th=[ 751], 10.00th=[ 1485], 20.00th=[ 1687], 00:27:50.482 | 30.00th=[ 1737], 40.00th=[ 1854], 50.00th=[ 1989], 60.00th=[ 2106], 00:27:50.482 | 70.00th=[ 2198], 80.00th=[ 2333], 90.00th=[ 2635], 95.00th=[ 2836], 00:27:50.482 | 99.00th=[ 3004], 99.50th=[ 3037], 99.90th=[ 3037], 99.95th=[ 3037], 00:27:50.482 | 99.99th=[ 3037] 00:27:50.482 bw ( KiB/s): min=26624, max=114688, per=1.19%, avg=58606.56, stdev=23642.64, samples=16 00:27:50.482 iops : min= 26, max= 112, avg=57.06, stdev=23.19, samples=16 00:27:50.482 lat (msec) : 250=0.51%, 500=1.88%, 750=2.74%, 1000=2.22%, 2000=43.25% 00:27:50.482 lat (msec) : >=2000=49.40% 00:27:50.482 cpu : usr=0.05%, sys=1.06%, ctx=916, majf=0, minf=32769 00:27:50.482 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.5%, >=64=89.2% 00:27:50.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.482 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:50.482 issued rwts: total=585,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.482 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:50.482 job4: (groupid=0, jobs=1): err= 0: pid=1454511: Sun Jun 9 09:06:11 2024 00:27:50.482 read: IOPS=80, BW=80.6MiB/s (84.5MB/s)(821MiB/10184msec) 00:27:50.482 
slat (usec): min=51, max=164145, avg=12227.49, stdev=25156.96 00:27:50.482 clat (msec): min=139, max=2436, avg=1433.97, stdev=478.94 00:27:50.482 lat (msec): min=291, max=2490, avg=1446.19, stdev=481.04 00:27:50.482 clat percentiles (msec): 00:27:50.482 | 1.00th=[ 300], 5.00th=[ 634], 10.00th=[ 1083], 20.00th=[ 1133], 00:27:50.482 | 30.00th=[ 1150], 40.00th=[ 1217], 50.00th=[ 1318], 60.00th=[ 1469], 00:27:50.482 | 70.00th=[ 1620], 80.00th=[ 1754], 90.00th=[ 2265], 95.00th=[ 2366], 00:27:50.482 | 99.00th=[ 2433], 99.50th=[ 2433], 99.90th=[ 2433], 99.95th=[ 2433], 00:27:50.482 | 99.99th=[ 2433] 00:27:50.482 bw ( KiB/s): min=47104, max=122880, per=1.80%, avg=88704.00, stdev=26031.56, samples=16 00:27:50.482 iops : min= 46, max= 120, avg=86.62, stdev=25.42, samples=16 00:27:50.482 lat (msec) : 250=0.12%, 500=3.78%, 750=1.95%, 1000=3.78%, 2000=74.79% 00:27:50.482 lat (msec) : >=2000=15.59% 00:27:50.482 cpu : usr=0.03%, sys=1.55%, ctx=963, majf=0, minf=32769 00:27:50.482 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.9%, >=64=92.3% 00:27:50.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.482 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:50.482 issued rwts: total=821,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.482 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:50.482 job4: (groupid=0, jobs=1): err= 0: pid=1454512: Sun Jun 9 09:06:11 2024 00:27:50.482 read: IOPS=63, BW=63.5MiB/s (66.6MB/s)(643MiB/10126msec) 00:27:50.482 slat (usec): min=30, max=229722, avg=15549.01, stdev=31965.75 00:27:50.482 clat (msec): min=125, max=2806, avg=1763.71, stdev=526.62 00:27:50.482 lat (msec): min=125, max=2817, avg=1779.26, stdev=527.73 00:27:50.482 clat percentiles (msec): 00:27:50.482 | 1.00th=[ 292], 5.00th=[ 642], 10.00th=[ 1183], 20.00th=[ 1452], 00:27:50.482 | 30.00th=[ 1552], 40.00th=[ 1653], 50.00th=[ 1770], 60.00th=[ 1888], 00:27:50.482 | 70.00th=[ 1955], 80.00th=[ 2198], 90.00th=[ 2500], 95.00th=[ 2567], 00:27:50.482 | 99.00th=[ 2769], 99.50th=[ 2769], 99.90th=[ 2802], 99.95th=[ 2802], 00:27:50.482 | 99.99th=[ 2802] 00:27:50.482 bw ( KiB/s): min= 4087, max=124928, per=1.34%, avg=66047.44, stdev=28969.20, samples=16 00:27:50.482 iops : min= 3, max= 122, avg=64.44, stdev=28.43, samples=16 00:27:50.482 lat (msec) : 250=0.93%, 500=2.18%, 750=2.18%, 1000=2.18%, 2000=64.23% 00:27:50.482 lat (msec) : >=2000=28.30% 00:27:50.482 cpu : usr=0.01%, sys=1.09%, ctx=871, majf=0, minf=32769 00:27:50.482 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=5.0%, >=64=90.2% 00:27:50.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.482 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:50.482 issued rwts: total=643,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.482 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:50.482 job4: (groupid=0, jobs=1): err= 0: pid=1454513: Sun Jun 9 09:06:11 2024 00:27:50.482 read: IOPS=70, BW=70.0MiB/s (73.4MB/s)(712MiB/10166msec) 00:27:50.482 slat (usec): min=28, max=167958, avg=14196.45, stdev=28350.44 00:27:50.482 clat (msec): min=54, max=2360, avg=1668.88, stdev=472.92 00:27:50.482 lat (msec): min=197, max=2362, avg=1683.07, stdev=473.45 00:27:50.482 clat percentiles (msec): 00:27:50.482 | 1.00th=[ 203], 5.00th=[ 550], 10.00th=[ 902], 20.00th=[ 1536], 00:27:50.482 | 30.00th=[ 1620], 40.00th=[ 1687], 50.00th=[ 1754], 60.00th=[ 1838], 00:27:50.482 | 70.00th=[ 1905], 80.00th=[ 1972], 90.00th=[ 2198], 95.00th=[ 
2265], 00:27:50.482 | 99.00th=[ 2333], 99.50th=[ 2333], 99.90th=[ 2366], 99.95th=[ 2366], 00:27:50.482 | 99.99th=[ 2366] 00:27:50.482 bw ( KiB/s): min=38912, max=100352, per=1.43%, avg=70340.53, stdev=22532.65, samples=17 00:27:50.482 iops : min= 38, max= 98, avg=68.59, stdev=22.06, samples=17 00:27:50.482 lat (msec) : 100=0.14%, 250=2.11%, 500=2.11%, 750=4.92%, 1000=1.97% 00:27:50.482 lat (msec) : 2000=72.33%, >=2000=16.43% 00:27:50.482 cpu : usr=0.02%, sys=1.07%, ctx=924, majf=0, minf=32769 00:27:50.482 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.5%, >=64=91.2% 00:27:50.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.482 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:50.482 issued rwts: total=712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.482 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:50.482 job4: (groupid=0, jobs=1): err= 0: pid=1454514: Sun Jun 9 09:06:11 2024 00:27:50.482 read: IOPS=102, BW=103MiB/s (108MB/s)(1040MiB/10111msec) 00:27:50.482 slat (usec): min=46, max=132902, avg=9627.23, stdev=25563.79 00:27:50.482 clat (msec): min=93, max=1542, avg=1168.08, stdev=203.72 00:27:50.482 lat (msec): min=123, max=1545, avg=1177.71, stdev=204.13 00:27:50.482 clat percentiles (msec): 00:27:50.482 | 1.00th=[ 255], 5.00th=[ 743], 10.00th=[ 1062], 20.00th=[ 1116], 00:27:50.482 | 30.00th=[ 1150], 40.00th=[ 1183], 50.00th=[ 1200], 60.00th=[ 1217], 00:27:50.482 | 70.00th=[ 1250], 80.00th=[ 1284], 90.00th=[ 1318], 95.00th=[ 1385], 00:27:50.482 | 99.00th=[ 1418], 99.50th=[ 1452], 99.90th=[ 1452], 99.95th=[ 1536], 00:27:50.482 | 99.99th=[ 1536] 00:27:50.482 bw ( KiB/s): min=80032, max=133120, per=2.10%, avg=103752.78, stdev=14271.96, samples=18 00:27:50.482 iops : min= 78, max= 130, avg=101.22, stdev=14.01, samples=18 00:27:50.482 lat (msec) : 100=0.10%, 250=0.77%, 500=2.31%, 750=2.02%, 1000=2.40% 00:27:50.482 lat (msec) : 2000=92.40% 00:27:50.482 cpu : usr=0.05%, sys=1.60%, ctx=945, majf=0, minf=32769 00:27:50.482 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.1%, >=64=93.9% 00:27:50.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.482 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:50.482 issued rwts: total=1040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.482 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:50.483 job4: (groupid=0, jobs=1): err= 0: pid=1454515: Sun Jun 9 09:06:11 2024 00:27:50.483 read: IOPS=56, BW=56.6MiB/s (59.3MB/s)(574MiB/10143msec) 00:27:50.483 slat (usec): min=36, max=166935, avg=17455.07, stdev=31075.47 00:27:50.483 clat (msec): min=121, max=3808, avg=2122.36, stdev=901.38 00:27:50.483 lat (msec): min=156, max=3821, avg=2139.81, stdev=905.61 00:27:50.483 clat percentiles (msec): 00:27:50.483 | 1.00th=[ 176], 5.00th=[ 456], 10.00th=[ 852], 20.00th=[ 1368], 00:27:50.483 | 30.00th=[ 1519], 40.00th=[ 1938], 50.00th=[ 2198], 60.00th=[ 2400], 00:27:50.483 | 70.00th=[ 2567], 80.00th=[ 3004], 90.00th=[ 3373], 95.00th=[ 3440], 00:27:50.483 | 99.00th=[ 3742], 99.50th=[ 3742], 99.90th=[ 3809], 99.95th=[ 3809], 00:27:50.483 | 99.99th=[ 3809] 00:27:50.483 bw ( KiB/s): min=16351, max=88064, per=1.03%, avg=50729.61, stdev=21209.69, samples=18 00:27:50.483 iops : min= 15, max= 86, avg=49.39, stdev=20.77, samples=18 00:27:50.483 lat (msec) : 250=2.26%, 500=2.79%, 750=2.61%, 1000=4.01%, 2000=29.09% 00:27:50.483 lat (msec) : >=2000=59.23% 00:27:50.483 cpu : usr=0.03%, sys=1.14%, ctx=935, majf=0, 
minf=32769 00:27:50.483 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.6%, >=64=89.0% 00:27:50.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.483 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:50.483 issued rwts: total=574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.483 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:50.483 job5: (groupid=0, jobs=1): err= 0: pid=1454521: Sun Jun 9 09:06:11 2024 00:27:50.483 read: IOPS=45, BW=45.9MiB/s (48.1MB/s)(464MiB/10118msec) 00:27:50.483 slat (usec): min=50, max=142730, avg=21597.70, stdev=31778.46 00:27:50.483 clat (msec): min=93, max=3750, avg=2313.12, stdev=899.84 00:27:50.483 lat (msec): min=121, max=3759, avg=2334.71, stdev=903.34 00:27:50.483 clat percentiles (msec): 00:27:50.483 | 1.00th=[ 127], 5.00th=[ 368], 10.00th=[ 902], 20.00th=[ 1552], 00:27:50.483 | 30.00th=[ 2198], 40.00th=[ 2433], 50.00th=[ 2500], 60.00th=[ 2567], 00:27:50.483 | 70.00th=[ 2702], 80.00th=[ 2903], 90.00th=[ 3473], 95.00th=[ 3608], 00:27:50.483 | 99.00th=[ 3742], 99.50th=[ 3742], 99.90th=[ 3742], 99.95th=[ 3742], 00:27:50.483 | 99.99th=[ 3742] 00:27:50.483 bw ( KiB/s): min=32768, max=67584, per=1.07%, avg=52922.54, stdev=13335.00, samples=13 00:27:50.483 iops : min= 32, max= 66, avg=51.62, stdev=12.95, samples=13 00:27:50.483 lat (msec) : 100=0.22%, 250=2.80%, 500=3.23%, 750=2.16%, 1000=3.23% 00:27:50.483 lat (msec) : 2000=12.93%, >=2000=75.43% 00:27:50.483 cpu : usr=0.03%, sys=1.07%, ctx=911, majf=0, minf=32769 00:27:50.483 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.4%, 32=6.9%, >=64=86.4% 00:27:50.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.483 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:27:50.483 issued rwts: total=464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.483 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:50.483 job5: (groupid=0, jobs=1): err= 0: pid=1454522: Sun Jun 9 09:06:11 2024 00:27:50.483 read: IOPS=63, BW=63.7MiB/s (66.8MB/s)(645MiB/10126msec) 00:27:50.483 slat (usec): min=30, max=201191, avg=15501.56, stdev=29001.44 00:27:50.483 clat (msec): min=124, max=2915, avg=1785.27, stdev=609.53 00:27:50.483 lat (msec): min=151, max=2940, avg=1800.77, stdev=610.72 00:27:50.483 clat percentiles (msec): 00:27:50.483 | 1.00th=[ 284], 5.00th=[ 609], 10.00th=[ 1200], 20.00th=[ 1301], 00:27:50.483 | 30.00th=[ 1418], 40.00th=[ 1586], 50.00th=[ 1737], 60.00th=[ 2022], 00:27:50.483 | 70.00th=[ 2265], 80.00th=[ 2333], 90.00th=[ 2567], 95.00th=[ 2735], 00:27:50.483 | 99.00th=[ 2802], 99.50th=[ 2869], 99.90th=[ 2903], 99.95th=[ 2903], 00:27:50.483 | 99.99th=[ 2903] 00:27:50.483 bw ( KiB/s): min=51200, max=106496, per=1.43%, avg=70724.27, stdev=17632.29, samples=15 00:27:50.483 iops : min= 50, max= 104, avg=69.07, stdev=17.22, samples=15 00:27:50.483 lat (msec) : 250=0.78%, 500=3.41%, 750=2.02%, 1000=2.33%, 2000=50.23% 00:27:50.483 lat (msec) : >=2000=41.24% 00:27:50.483 cpu : usr=0.04%, sys=1.12%, ctx=937, majf=0, minf=32769 00:27:50.483 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=5.0%, >=64=90.2% 00:27:50.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.483 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:50.483 issued rwts: total=645,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.483 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:50.483 job5: (groupid=0, jobs=1): err= 0: pid=1454523: 
Sun Jun 9 09:06:11 2024 00:27:50.483 read: IOPS=56, BW=56.1MiB/s (58.8MB/s)(569MiB/10139msec) 00:27:50.483 slat (usec): min=42, max=329861, avg=17579.57, stdev=31301.54 00:27:50.483 clat (msec): min=133, max=3730, avg=1970.34, stdev=870.07 00:27:50.483 lat (msec): min=152, max=3769, avg=1987.92, stdev=873.87 00:27:50.483 clat percentiles (msec): 00:27:50.483 | 1.00th=[ 268], 5.00th=[ 418], 10.00th=[ 709], 20.00th=[ 1485], 00:27:50.483 | 30.00th=[ 1586], 40.00th=[ 1687], 50.00th=[ 1821], 60.00th=[ 2165], 00:27:50.483 | 70.00th=[ 2366], 80.00th=[ 2567], 90.00th=[ 3339], 95.00th=[ 3574], 00:27:50.484 | 99.00th=[ 3675], 99.50th=[ 3742], 99.90th=[ 3742], 99.95th=[ 3742], 00:27:50.484 | 99.99th=[ 3742] 00:27:50.484 bw ( KiB/s): min=22528, max=102400, per=1.31%, avg=64658.29, stdev=24108.31, samples=14 00:27:50.484 iops : min= 22, max= 100, avg=63.14, stdev=23.54, samples=14 00:27:50.484 lat (msec) : 250=0.70%, 500=5.27%, 750=5.45%, 1000=2.64%, 2000=40.95% 00:27:50.484 lat (msec) : >=2000=44.99% 00:27:50.484 cpu : usr=0.02%, sys=1.17%, ctx=973, majf=0, minf=32769 00:27:50.484 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.6%, >=64=88.9% 00:27:50.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.484 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:50.484 issued rwts: total=569,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:50.484 job5: (groupid=0, jobs=1): err= 0: pid=1454524: Sun Jun 9 09:06:11 2024 00:27:50.484 read: IOPS=92, BW=92.4MiB/s (96.9MB/s)(936MiB/10128msec) 00:27:50.484 slat (usec): min=51, max=137951, avg=10679.02, stdev=18380.10 00:27:50.484 clat (msec): min=125, max=1568, avg=1292.51, stdev=264.45 00:27:50.484 lat (msec): min=128, max=1571, avg=1303.18, stdev=265.09 00:27:50.484 clat percentiles (msec): 00:27:50.484 | 1.00th=[ 234], 5.00th=[ 575], 10.00th=[ 1036], 20.00th=[ 1267], 00:27:50.484 | 30.00th=[ 1301], 40.00th=[ 1334], 50.00th=[ 1351], 60.00th=[ 1385], 00:27:50.484 | 70.00th=[ 1418], 80.00th=[ 1452], 90.00th=[ 1485], 95.00th=[ 1502], 00:27:50.484 | 99.00th=[ 1552], 99.50th=[ 1569], 99.90th=[ 1569], 99.95th=[ 1569], 00:27:50.484 | 99.99th=[ 1569] 00:27:50.484 bw ( KiB/s): min=28672, max=112640, per=1.87%, avg=92026.17, stdev=18258.90, samples=18 00:27:50.484 iops : min= 28, max= 110, avg=89.78, stdev=17.84, samples=18 00:27:50.484 lat (msec) : 250=1.07%, 500=3.10%, 750=2.67%, 1000=2.35%, 2000=90.81% 00:27:50.484 cpu : usr=0.05%, sys=1.81%, ctx=838, majf=0, minf=32769 00:27:50.484 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.4%, >=64=93.3% 00:27:50.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.484 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:50.484 issued rwts: total=936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:50.484 job5: (groupid=0, jobs=1): err= 0: pid=1454525: Sun Jun 9 09:06:11 2024 00:27:50.484 read: IOPS=65, BW=65.9MiB/s (69.1MB/s)(668MiB/10140msec) 00:27:50.484 slat (usec): min=31, max=228417, avg=14990.04, stdev=27710.51 00:27:50.484 clat (msec): min=123, max=2262, avg=1720.01, stdev=442.25 00:27:50.484 lat (msec): min=146, max=2263, avg=1735.00, stdev=442.44 00:27:50.484 clat percentiles (msec): 00:27:50.484 | 1.00th=[ 178], 5.00th=[ 625], 10.00th=[ 1116], 20.00th=[ 1586], 00:27:50.484 | 30.00th=[ 1670], 40.00th=[ 1754], 50.00th=[ 1821], 60.00th=[ 1871], 
00:27:50.484 | 70.00th=[ 1938], 80.00th=[ 2056], 90.00th=[ 2123], 95.00th=[ 2198], 00:27:50.484 | 99.00th=[ 2232], 99.50th=[ 2265], 99.90th=[ 2265], 99.95th=[ 2265], 00:27:50.484 | 99.99th=[ 2265] 00:27:50.484 bw ( KiB/s): min=34746, max=104657, per=1.40%, avg=69124.31, stdev=18694.07, samples=16 00:27:50.484 iops : min= 33, max= 102, avg=67.31, stdev=18.41, samples=16 00:27:50.484 lat (msec) : 250=2.84%, 500=1.35%, 750=1.80%, 1000=2.10%, 2000=67.81% 00:27:50.484 lat (msec) : >=2000=24.10% 00:27:50.484 cpu : usr=0.02%, sys=1.07%, ctx=896, majf=0, minf=32769 00:27:50.484 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.6% 00:27:50.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.484 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:50.484 issued rwts: total=668,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:50.484 job5: (groupid=0, jobs=1): err= 0: pid=1454526: Sun Jun 9 09:06:11 2024 00:27:50.484 read: IOPS=60, BW=60.8MiB/s (63.8MB/s)(617MiB/10146msec) 00:27:50.484 slat (usec): min=32, max=199681, avg=16259.48, stdev=26510.76 00:27:50.484 clat (msec): min=110, max=2889, avg=1986.96, stdev=650.35 00:27:50.484 lat (msec): min=151, max=2933, avg=2003.22, stdev=652.58 00:27:50.484 clat percentiles (msec): 00:27:50.484 | 1.00th=[ 211], 5.00th=[ 514], 10.00th=[ 1028], 20.00th=[ 1485], 00:27:50.484 | 30.00th=[ 1821], 40.00th=[ 1905], 50.00th=[ 2056], 60.00th=[ 2265], 00:27:50.484 | 70.00th=[ 2467], 80.00th=[ 2567], 90.00th=[ 2735], 95.00th=[ 2802], 00:27:50.484 | 99.00th=[ 2869], 99.50th=[ 2869], 99.90th=[ 2903], 99.95th=[ 2903], 00:27:50.484 | 99.99th=[ 2903] 00:27:50.484 bw ( KiB/s): min=34816, max=81920, per=1.13%, avg=55625.00, stdev=12785.03, samples=18 00:27:50.484 iops : min= 34, max= 80, avg=54.22, stdev=12.49, samples=18 00:27:50.484 lat (msec) : 250=1.30%, 500=3.08%, 750=2.43%, 1000=2.11%, 2000=38.25% 00:27:50.484 lat (msec) : >=2000=52.84% 00:27:50.484 cpu : usr=0.00%, sys=1.37%, ctx=968, majf=0, minf=32769 00:27:50.484 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.2%, >=64=89.8% 00:27:50.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.484 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:50.484 issued rwts: total=617,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:50.484 job5: (groupid=0, jobs=1): err= 0: pid=1454527: Sun Jun 9 09:06:11 2024 00:27:50.484 read: IOPS=47, BW=47.3MiB/s (49.6MB/s)(480MiB/10152msec) 00:27:50.484 slat (usec): min=38, max=190736, avg=20884.30, stdev=36200.38 00:27:50.484 clat (msec): min=124, max=4756, avg=2445.12, stdev=1248.38 00:27:50.484 lat (msec): min=277, max=4799, avg=2466.01, stdev=1253.19 00:27:50.484 clat percentiles (msec): 00:27:50.484 | 1.00th=[ 305], 5.00th=[ 485], 10.00th=[ 726], 20.00th=[ 1452], 00:27:50.484 | 30.00th=[ 1603], 40.00th=[ 1754], 50.00th=[ 2198], 60.00th=[ 2836], 00:27:50.484 | 70.00th=[ 3339], 80.00th=[ 3742], 90.00th=[ 4245], 95.00th=[ 4530], 00:27:50.484 | 99.00th=[ 4665], 99.50th=[ 4665], 99.90th=[ 4732], 99.95th=[ 4732], 00:27:50.484 | 99.99th=[ 4732] 00:27:50.484 bw ( KiB/s): min=10219, max=77824, per=0.86%, avg=42395.24, stdev=20502.16, samples=17 00:27:50.484 iops : min= 9, max= 76, avg=41.24, stdev=20.14, samples=17 00:27:50.484 lat (msec) : 250=0.21%, 500=5.00%, 750=5.00%, 1000=3.54%, 2000=29.79% 00:27:50.484 lat (msec) : 
>=2000=56.46% 00:27:50.484 cpu : usr=0.01%, sys=1.02%, ctx=947, majf=0, minf=32769 00:27:50.484 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.3%, 32=6.7%, >=64=86.9% 00:27:50.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.484 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:27:50.484 issued rwts: total=480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:50.484 job5: (groupid=0, jobs=1): err= 0: pid=1454528: Sun Jun 9 09:06:11 2024 00:27:50.485 read: IOPS=77, BW=77.4MiB/s (81.1MB/s)(784MiB/10133msec) 00:27:50.485 slat (usec): min=38, max=286329, avg=12821.36, stdev=21216.56 00:27:50.485 clat (msec): min=75, max=2391, avg=1431.00, stdev=381.53 00:27:50.485 lat (msec): min=132, max=2393, avg=1443.82, stdev=382.22 00:27:50.485 clat percentiles (msec): 00:27:50.485 | 1.00th=[ 178], 5.00th=[ 885], 10.00th=[ 1133], 20.00th=[ 1200], 00:27:50.485 | 30.00th=[ 1267], 40.00th=[ 1318], 50.00th=[ 1351], 60.00th=[ 1385], 00:27:50.485 | 70.00th=[ 1536], 80.00th=[ 1787], 90.00th=[ 1989], 95.00th=[ 2140], 00:27:50.485 | 99.00th=[ 2333], 99.50th=[ 2366], 99.90th=[ 2400], 99.95th=[ 2400], 00:27:50.485 | 99.99th=[ 2400] 00:27:50.485 bw ( KiB/s): min=30658, max=120832, per=1.70%, avg=83964.12, stdev=26700.43, samples=16 00:27:50.485 iops : min= 29, max= 118, avg=81.94, stdev=26.20, samples=16 00:27:50.485 lat (msec) : 100=0.13%, 250=1.15%, 500=0.77%, 750=1.53%, 1000=2.04% 00:27:50.485 lat (msec) : 2000=85.20%, >=2000=9.18% 00:27:50.485 cpu : usr=0.05%, sys=1.28%, ctx=930, majf=0, minf=32769 00:27:50.485 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.1%, >=64=92.0% 00:27:50.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.485 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:50.485 issued rwts: total=784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.485 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:50.485 job5: (groupid=0, jobs=1): err= 0: pid=1454529: Sun Jun 9 09:06:11 2024 00:27:50.485 read: IOPS=50, BW=50.1MiB/s (52.6MB/s)(507MiB/10116msec) 00:27:50.485 slat (usec): min=36, max=168773, avg=19778.71, stdev=29252.59 00:27:50.485 clat (msec): min=85, max=3597, avg=2207.09, stdev=714.89 00:27:50.485 lat (msec): min=125, max=3637, avg=2226.87, stdev=714.36 00:27:50.485 clat percentiles (msec): 00:27:50.485 | 1.00th=[ 226], 5.00th=[ 969], 10.00th=[ 1452], 20.00th=[ 1703], 00:27:50.485 | 30.00th=[ 1921], 40.00th=[ 2005], 50.00th=[ 2072], 60.00th=[ 2265], 00:27:50.485 | 70.00th=[ 2668], 80.00th=[ 2869], 90.00th=[ 3138], 95.00th=[ 3440], 00:27:50.485 | 99.00th=[ 3574], 99.50th=[ 3608], 99.90th=[ 3608], 99.95th=[ 3608], 00:27:50.485 | 99.99th=[ 3608] 00:27:50.485 bw ( KiB/s): min=26624, max=94208, per=1.12%, avg=55437.00, stdev=22010.67, samples=14 00:27:50.485 iops : min= 26, max= 92, avg=54.07, stdev=21.56, samples=14 00:27:50.485 lat (msec) : 100=0.20%, 250=0.99%, 500=1.58%, 750=0.39%, 1000=2.96% 00:27:50.485 lat (msec) : 2000=32.54%, >=2000=61.34% 00:27:50.485 cpu : usr=0.02%, sys=0.99%, ctx=906, majf=0, minf=32769 00:27:50.485 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.3%, >=64=87.6% 00:27:50.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.485 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:27:50.485 issued rwts: total=507,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.485 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:27:50.485 job5: (groupid=0, jobs=1): err= 0: pid=1454530: Sun Jun 9 09:06:11 2024 00:27:50.485 read: IOPS=51, BW=51.9MiB/s (54.4MB/s)(524MiB/10102msec) 00:27:50.485 slat (usec): min=36, max=197085, avg=19097.68, stdev=31911.74 00:27:50.485 clat (msec): min=91, max=3727, avg=2291.68, stdev=852.74 00:27:50.485 lat (msec): min=156, max=3785, avg=2310.78, stdev=854.69 00:27:50.485 clat percentiles (msec): 00:27:50.485 | 1.00th=[ 226], 5.00th=[ 659], 10.00th=[ 1418], 20.00th=[ 1620], 00:27:50.485 | 30.00th=[ 1804], 40.00th=[ 2056], 50.00th=[ 2232], 60.00th=[ 2467], 00:27:50.485 | 70.00th=[ 2836], 80.00th=[ 3205], 90.00th=[ 3440], 95.00th=[ 3608], 00:27:50.485 | 99.00th=[ 3708], 99.50th=[ 3708], 99.90th=[ 3742], 99.95th=[ 3742], 00:27:50.485 | 99.99th=[ 3742] 00:27:50.485 bw ( KiB/s): min=12288, max=79872, per=0.92%, avg=45162.50, stdev=18302.21, samples=18 00:27:50.485 iops : min= 12, max= 78, avg=44.06, stdev=17.82, samples=18 00:27:50.485 lat (msec) : 100=0.19%, 250=1.34%, 500=1.72%, 750=2.29%, 1000=1.53% 00:27:50.485 lat (msec) : 2000=30.92%, >=2000=62.02% 00:27:50.485 cpu : usr=0.00%, sys=1.11%, ctx=865, majf=0, minf=32769 00:27:50.485 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.1%, 32=6.1%, >=64=88.0% 00:27:50.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.485 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:27:50.485 issued rwts: total=524,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.485 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:50.485 job5: (groupid=0, jobs=1): err= 0: pid=1454531: Sun Jun 9 09:06:11 2024 00:27:50.485 read: IOPS=53, BW=53.6MiB/s (56.2MB/s)(541MiB/10102msec) 00:27:50.485 slat (usec): min=29, max=182748, avg=18525.06, stdev=27233.27 00:27:50.485 clat (msec): min=77, max=3328, avg=1987.03, stdev=716.20 00:27:50.485 lat (msec): min=112, max=3352, avg=2005.55, stdev=718.97 00:27:50.485 clat percentiles (msec): 00:27:50.485 | 1.00th=[ 129], 5.00th=[ 542], 10.00th=[ 1020], 20.00th=[ 1636], 00:27:50.485 | 30.00th=[ 1703], 40.00th=[ 1804], 50.00th=[ 1921], 60.00th=[ 2106], 00:27:50.485 | 70.00th=[ 2198], 80.00th=[ 2702], 90.00th=[ 3071], 95.00th=[ 3171], 00:27:50.485 | 99.00th=[ 3239], 99.50th=[ 3306], 99.90th=[ 3339], 99.95th=[ 3339], 00:27:50.485 | 99.99th=[ 3339] 00:27:50.485 bw ( KiB/s): min=24576, max=94208, per=1.22%, avg=60408.07, stdev=20914.71, samples=14 00:27:50.485 iops : min= 24, max= 92, avg=58.93, stdev=20.44, samples=14 00:27:50.485 lat (msec) : 100=0.18%, 250=1.66%, 500=3.14%, 750=3.14%, 1000=1.66% 00:27:50.485 lat (msec) : 2000=41.96%, >=2000=48.24% 00:27:50.485 cpu : usr=0.04%, sys=1.02%, ctx=929, majf=0, minf=32769 00:27:50.485 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=3.0%, 32=5.9%, >=64=88.4% 00:27:50.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.485 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:50.485 issued rwts: total=541,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.485 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:50.485 job5: (groupid=0, jobs=1): err= 0: pid=1454532: Sun Jun 9 09:06:11 2024 00:27:50.485 read: IOPS=45, BW=45.4MiB/s (47.6MB/s)(460MiB/10130msec) 00:27:50.485 slat (usec): min=46, max=150900, avg=21747.09, stdev=30565.20 00:27:50.485 clat (msec): min=124, max=3295, avg=2420.40, stdev=749.32 00:27:50.485 lat (msec): min=152, max=3326, avg=2442.14, stdev=750.16 00:27:50.485 clat percentiles (msec): 00:27:50.485 | 1.00th=[ 
171], 5.00th=[ 894], 10.00th=[ 1267], 20.00th=[ 1888],
00:27:50.485 | 30.00th=[ 2198], 40.00th=[ 2333], 50.00th=[ 2534], 60.00th=[ 2836],
00:27:50.485 | 70.00th=[ 2970], 80.00th=[ 3071], 90.00th=[ 3171], 95.00th=[ 3239],
00:27:50.485 | 99.00th=[ 3272], 99.50th=[ 3272], 99.90th=[ 3306], 99.95th=[ 3306],
00:27:50.485 | 99.99th=[ 3306]
00:27:50.485 bw ( KiB/s): min=16384, max=79872, per=0.92%, avg=45461.73, stdev=19311.99, samples=15
00:27:50.485 iops : min= 16, max= 78, avg=44.33, stdev=18.92, samples=15
00:27:50.485 lat (msec) : 250=1.74%, 500=1.30%, 750=1.09%, 1000=2.17%, 2000=15.00%
00:27:50.485 lat (msec) : >=2000=78.70%
00:27:50.485 cpu : usr=0.00%, sys=1.14%, ctx=929, majf=0, minf=32769
00:27:50.485 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.5%, 32=7.0%, >=64=86.3%
00:27:50.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.485 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:27:50.485 issued rwts: total=460,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.485 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.485 job5: (groupid=0, jobs=1): err= 0: pid=1454533: Sun Jun 9 09:06:11 2024
00:27:50.485 read: IOPS=55, BW=55.0MiB/s (57.7MB/s)(558MiB/10142msec)
00:27:50.485 slat (usec): min=32, max=194104, avg=17980.36, stdev=26184.85
00:27:50.485 clat (msec): min=105, max=3179, avg=2034.29, stdev=630.65
00:27:50.485 lat (msec): min=175, max=3181, avg=2052.27, stdev=631.34
00:27:50.485 clat percentiles (msec):
00:27:50.485 | 1.00th=[ 249], 5.00th=[ 634], 10.00th=[ 1116], 20.00th=[ 1804],
00:27:50.485 | 30.00th=[ 1905], 40.00th=[ 1972], 50.00th=[ 2022], 60.00th=[ 2089],
00:27:50.485 | 70.00th=[ 2232], 80.00th=[ 2635], 90.00th=[ 2869], 95.00th=[ 3037],
00:27:50.485 | 99.00th=[ 3104], 99.50th=[ 3138], 99.90th=[ 3171], 99.95th=[ 3171],
00:27:50.485 | 99.99th=[ 3171]
00:27:50.485 bw ( KiB/s): min=24625, max=75776, per=1.12%, avg=55064.50, stdev=14440.18, samples=16
00:27:50.485 iops : min= 24, max= 74, avg=53.69, stdev=14.15, samples=16
00:27:50.485 lat (msec) : 250=1.08%, 500=2.51%, 750=2.15%, 1000=2.69%, 2000=37.46%
00:27:50.485 lat (msec) : >=2000=54.12%
00:27:50.485 cpu : usr=0.02%, sys=1.04%, ctx=929, majf=0, minf=32769
00:27:50.485 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.9%, 32=5.7%, >=64=88.7%
00:27:50.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:50.485 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:27:50.485 issued rwts: total=558,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:50.485 latency : target=0, window=0, percentile=100.00%, depth=128
00:27:50.485
00:27:50.485 Run status group 0 (all jobs):
00:27:50.485 READ: bw=4817MiB/s (5051MB/s), 32.3MiB/s-103MiB/s (33.8MB/s-108MB/s), io=48.0GiB (51.6GB), run=10079-10209msec
00:27:50.485
00:27:50.485 Disk stats (read/write):
00:27:50.485 nvme0n1: ios=63212/0, merge=0/0, ticks=9617583/0, in_queue=9617583, util=98.59%
00:27:50.485 nvme2n1: ios=60569/0, merge=0/0, ticks=9356836/0, in_queue=9356836, util=98.69%
00:27:50.485 nvme3n1: ios=60741/0, merge=0/0, ticks=9445227/0, in_queue=9445227, util=98.60%
00:27:50.485 nvme4n1: ios=73360/0, merge=0/0, ticks=11205871/0, in_queue=11205871, util=98.99%
00:27:50.485 nvme5n1: ios=68218/0, merge=0/0, ticks=10298334/0, in_queue=10298334, util=99.03%
00:27:50.485 nvme6n1: ios=60907/0, merge=0/0, ticks=9472629/0, in_queue=9472629, util=98.76%
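The blocks above are standard fio per-job output for the read jobs (job2 through job5 in this section): completion-latency percentiles in msec, per-job bandwidth and IOPS samples, and queue-depth histograms at the configured depth of 128, followed by the group totals and per-namespace disk stats. Dividing io by issued rwts (for example, the job2 block at the top of this section read 647MiB in 647 IOs) works out to roughly 1MiB per IO. A workload of this shape could be approximated with an invocation along the lines of the sketch below; the job name, device path, and randread pattern are illustrative assumptions, not values taken from the test's actual job file:

  # Hypothetical fio command line approximating one of the traced jobs:
  # 1MiB reads at queue depth 128 against one NVMe-oF namespace for ~10s.
  fio --name=job2 --filename=/dev/nvme2n1 \
      --rw=randread --bs=1M --iodepth=128 --direct=1 \
      --ioengine=libaio --time_based --runtime=10 --group_reporting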
nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:27:50.485 09:06:11 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:27:50.485 09:06:11 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:27:50.485 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:27:50.485 09:06:12 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:27:50.485 09:06:12 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # local i=0 00:27:50.485 09:06:12 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:27:50.485 09:06:12 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # grep -q -w SPDK00000000000000 00:27:50.485 09:06:12 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:27:50.485 09:06:12 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # grep -q -w SPDK00000000000000 00:27:50.485 09:06:12 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # return 0 00:27:50.485 09:06:12 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:50.485 09:06:12 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:50.486 09:06:12 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:50.486 09:06:12 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:50.486 09:06:12 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:27:50.486 09:06:12 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:50.743 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:50.743 09:06:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:27:50.743 09:06:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # local i=0 00:27:50.743 09:06:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:27:50.743 09:06:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # grep -q -w SPDK00000000000001 00:27:50.743 09:06:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:27:50.743 09:06:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # grep -q -w SPDK00000000000001 00:27:50.743 09:06:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # return 0 00:27:50.744 09:06:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:50.744 09:06:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:50.744 09:06:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:51.002 09:06:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.002 09:06:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:27:51.002 09:06:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:27:51.936 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:27:51.936 09:06:14 nvmf_rdma.nvmf_srq_overwhelm -- 
target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:27:51.936 09:06:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # local i=0 00:27:51.936 09:06:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:27:51.936 09:06:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # grep -q -w SPDK00000000000002 00:27:51.936 09:06:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # grep -q -w SPDK00000000000002 00:27:51.937 09:06:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:27:51.937 09:06:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # return 0 00:27:51.937 09:06:14 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:51.937 09:06:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.937 09:06:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:51.937 09:06:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.937 09:06:14 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:27:51.937 09:06:14 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:27:52.503 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:27:52.503 09:06:15 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:27:52.503 09:06:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # local i=0 00:27:52.503 09:06:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:27:52.503 09:06:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # grep -q -w SPDK00000000000003 00:27:52.503 09:06:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:27:52.503 09:06:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # grep -q -w SPDK00000000000003 00:27:52.761 09:06:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # return 0 00:27:52.761 09:06:15 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:52.761 09:06:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.762 09:06:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:52.762 09:06:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.762 09:06:15 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:27:52.762 09:06:15 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:27:53.698 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:27:53.698 09:06:15 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:27:53.698 09:06:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # local i=0 00:27:53.698 09:06:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:27:53.698 09:06:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # grep -q -w SPDK00000000000004 00:27:53.698 09:06:15 
nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:27:53.698 09:06:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # grep -q -w SPDK00000000000004 00:27:53.698 09:06:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # return 0 00:27:53.698 09:06:15 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:27:53.698 09:06:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.698 09:06:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:53.698 09:06:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.698 09:06:15 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:27:53.698 09:06:15 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:27:54.266 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:27:54.266 09:06:16 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:27:54.266 09:06:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1218 -- # local i=0 00:27:54.266 09:06:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # lsblk -o NAME,SERIAL 00:27:54.266 09:06:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # grep -q -w SPDK00000000000005 00:27:54.525 09:06:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # lsblk -l -o NAME,SERIAL 00:27:54.525 09:06:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1226 -- # grep -q -w SPDK00000000000005 00:27:54.525 09:06:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # return 0 00:27:54.525 09:06:16 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:27:54.525 09:06:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.525 09:06:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:54.525 09:06:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.525 09:06:16 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:27:54.525 09:06:16 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:27:54.525 09:06:16 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:54.525 09:06:16 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # sync 00:27:54.525 09:06:16 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:27:54.525 09:06:16 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:27:54.525 09:06:16 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@120 -- # set +e 00:27:54.525 09:06:16 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:54.525 09:06:16 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:27:54.525 rmmod nvme_rdma 00:27:54.525 rmmod nvme_fabrics 00:27:54.525 09:06:16 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:54.525 09:06:16 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set -e 00:27:54.525 09:06:16 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # return 0 00:27:54.525 09:06:16 
nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@489 -- # '[' -n 1453668 ']' 00:27:54.525 09:06:16 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@490 -- # killprocess 1453668 00:27:54.525 09:06:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@949 -- # '[' -z 1453668 ']' 00:27:54.525 09:06:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@953 -- # kill -0 1453668 00:27:54.525 09:06:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # uname 00:27:54.525 09:06:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:27:54.525 09:06:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1453668 00:27:54.525 09:06:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:27:54.525 09:06:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:27:54.525 09:06:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1453668' 00:27:54.525 killing process with pid 1453668 00:27:54.525 09:06:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@968 -- # kill 1453668 00:27:54.525 09:06:16 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@973 -- # wait 1453668 00:27:54.785 09:06:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:54.785 09:06:17 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:27:54.785 00:27:54.785 real 0m24.667s 00:27:54.785 user 1m24.767s 00:27:54.785 sys 0m15.351s 00:27:54.785 09:06:17 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1125 -- # xtrace_disable 00:27:54.785 09:06:17 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:54.785 ************************************ 00:27:54.785 END TEST nvmf_srq_overwhelm 00:27:54.785 ************************************ 00:27:54.785 09:06:17 nvmf_rdma -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:27:54.785 09:06:17 nvmf_rdma -- common/autotest_common.sh@1100 -- # '[' 3 -le 1 ']' 00:27:54.785 09:06:17 nvmf_rdma -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:54.785 09:06:17 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:27:55.044 ************************************ 00:27:55.044 START TEST nvmf_shutdown 00:27:55.044 ************************************ 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:27:55.044 * Looking for test storage... 
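The killprocess trace above runs a fixed guard sequence before the actual kill. A minimal reconstruction of that helper (a sketch; the body is inferred from the xtrace markers above, not copied from the source tree):

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                    # no pid recorded, nothing to kill
      kill -0 "$pid" 2>/dev/null || return 0       # process already gone
      if [ "$(uname)" = Linux ]; then
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0 above
          [ "$process_name" = sudo ] && return 1           # never signal sudo itself
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                          # reap the child so its exit code is collected
  }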
00:27:55.044 * Found test storage at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801347e8-3fd0-e911-906e-0017a4403562 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=801347e8-3fd0-e911-906e-0017a4403562 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/common.sh 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:55.044 09:06:17 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:55.045 09:06:17 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:55.045 09:06:17 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:55.045 09:06:17 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:55.045 09:06:17 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:27:55.045 09:06:17 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:27:55.045 09:06:17 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:55.045 ************************************ 00:27:55.045 START TEST nvmf_shutdown_tc1 00:27:55.045 ************************************ 00:27:55.045 09:06:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc1 00:27:55.045 09:06:17 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:55.045 09:06:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:55.045 09:06:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:27:55.045 09:06:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:55.045 09:06:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:55.045 09:06:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:55.045 09:06:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:55.045 09:06:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:55.045 09:06:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:55.045 09:06:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:55.045 09:06:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:55.045 09:06:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:55.045 09:06:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:55.045 09:06:17 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:00.412 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:00.412 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:00.412 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:00.412 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:00.412 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:00.412 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:00.412 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:00.412 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:28:00.412 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:00.412 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:28:00.412 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:28:00.412 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:28:00.412 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:28:00.412 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:28:00.412 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:00.412 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:00.412 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:00.412 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:00.412 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:00.412 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:00.412 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:00.412 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:00.412 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:00.412 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:00.412 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:00.412 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:00.412 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:00.412 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:28:00.412 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:00.413 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:00.413 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # modinfo irdma 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:00.413 Found net devices under 0000:af:00.0: cvl_0_0 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:00.413 Found net devices under 0000:af:00.1: cvl_0_1 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 
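The NIC probe traced above is a small sysfs walk: each whitelisted PCI address is mapped to its kernel net device by globbing /sys/bus/pci/devices/$pci/net/. A sketch of that loop under the assumption of a standard sysfs layout (the parameter expansions match the trace):

  shopt -s nullglob                                 # an unbound device yields an empty array
  net_devs=()
  for pci in "${pci_devs[@]}"; do                   # here: 0000:af:00.0 and 0000:af:00.1
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      (( ${#pci_net_devs[@]} == 0 )) && continue    # no netdev bound, skip the device
      pci_net_devs=("${pci_net_devs[@]##*/}")       # strip the sysfs path, keep cvl_0_0 etc.
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done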
00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # rdma_device_init 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # uname 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@63 -- # modprobe ib_core 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo cvl_0_0 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- 
# echo cvl_0_1 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:28:00.413 12: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:28:00.413 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:28:00.413 altname enp175s0f0np0 00:28:00.413 altname ens801f0np0 00:28:00.413 inet 192.168.100.8/24 scope global cvl_0_0 00:28:00.413 valid_lft forever preferred_lft forever 00:28:00.413 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:28:00.413 valid_lft forever preferred_lft forever 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:28:00.413 13: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:28:00.413 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:28:00.413 altname enp175s0f1np1 00:28:00.413 altname ens801f1np1 00:28:00.413 inet 192.168.100.9/24 scope global cvl_0_1 00:28:00.413 valid_lft forever preferred_lft forever 00:28:00.413 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:28:00.413 valid_lft forever preferred_lft forever 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:28:00.413 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 
-- # get_available_rdma_ips 00:28:00.414 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:28:00.414 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:00.414 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:00.414 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:00.414 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:00.672 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:00.672 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:00.672 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:00.672 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:28:00.672 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo cvl_0_0 00:28:00.672 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:28:00.672 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:00.672 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:00.672 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:28:00.672 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:00.672 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:28:00.672 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo cvl_0_1 00:28:00.672 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:28:00.672 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:00.672 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:28:00.672 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:28:00.672 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:28:00.672 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:00.672 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:00.672 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:00.672 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:28:00.672 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:28:00.672 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:28:00.672 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:00.672 09:06:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:00.672 
09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:28:00.672 192.168.100.9' 00:28:00.672 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:28:00.672 192.168.100.9' 00:28:00.672 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # head -n 1 00:28:00.672 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:00.672 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:28:00.672 192.168.100.9' 00:28:00.672 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # head -n 1 00:28:00.672 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # tail -n +2 00:28:00.672 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:00.672 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:28:00.672 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:00.672 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:28:00.672 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:28:00.672 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:28:00.672 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:00.672 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:00.672 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:00.672 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:00.672 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1460195 00:28:00.672 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1460195 00:28:00.672 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:00.672 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 1460195 ']' 00:28:00.672 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.672 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:00.672 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:00.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:00.672 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:00.672 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:00.672 [2024-06-09 09:06:23.093865] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
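The address bookkeeping traced in common.sh@456-458 a few lines up is first/rest selection on a newline-separated IP list. A sketch using the interface names from this run:

  get_ip_address() {                                # ip -o -4 prints "N: IF inet A.B.C.D/len ..."
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  RDMA_IP_LIST=$(for nic in cvl_0_0 cvl_0_1; do get_ip_address "$nic"; done)
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
  [ -z "$NVMF_FIRST_TARGET_IP" ] && exit 1          # no RDMA-capable IP, abort the test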
00:28:00.672 [2024-06-09 09:06:23.093920] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:00.672 EAL: No free 2048 kB hugepages reported on node 1 00:28:00.672 [2024-06-09 09:06:23.151117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:00.672 [2024-06-09 09:06:23.228238] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:00.672 [2024-06-09 09:06:23.228277] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:00.672 [2024-06-09 09:06:23.228284] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:00.672 [2024-06-09 09:06:23.228290] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:00.672 [2024-06-09 09:06:23.228295] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:00.672 [2024-06-09 09:06:23.228336] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:28:00.672 [2024-06-09 09:06:23.228445] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:28:00.672 [2024-06-09 09:06:23.228574] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:00.672 [2024-06-09 09:06:23.228574] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:28:01.608 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:01.608 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:28:01.608 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:01.608 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:01.608 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:01.608 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:01.608 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:01.608 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.608 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:01.608 [2024-06-09 09:06:23.943583] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x1842be0/0x1842220) succeed. 00:28:01.608 [2024-06-09 09:06:23.952937] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x1843f90/0x18427a0) succeed. 00:28:01.608 [2024-06-09 09:06:23.952959] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:28:01.608 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.608 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:01.608 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:01.608 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:01.608 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:01.608 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:01.608 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:01.608 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:01.608 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:01.608 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:01.608 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:01.608 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:01.608 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:01.608 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:01.608 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:01.608 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:01.608 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:01.608 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:01.608 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:01.608 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:01.608 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:01.608 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:01.608 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:01.608 09:06:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:01.608 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:01.608 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:28:01.608 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:01.608 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.608 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:01.608 Malloc1 00:28:01.608 [2024-06-09 09:06:24.052151] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:01.608 Malloc2 00:28:01.608 
Malloc3 00:28:01.608 Malloc4 00:28:01.867 Malloc5 00:28:01.867 Malloc6 00:28:01.867 Malloc7 00:28:01.867 Malloc8 00:28:01.867 Malloc9 00:28:02.126 Malloc10 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1460472 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1460472 /var/tmp/bdevperf.sock 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # '[' -z 1460472 ']' 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:02.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
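The bdev_svc launch traced above hands its JSON config over via process substitution, which is why the command line shows --json /dev/fd/63. A sketch of the launch pattern from shutdown.sh@77-79 ($rootdir stands in for the workspace path):

  "$rootdir/test/app/bdev_svc/bdev_svc" -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
          --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) &
  perfpid=$!
  waitforlisten $perfpid /var/tmp/bdevperf.sock     # returns once the RPC socket accepts connections

gen_nvmf_target_json emits one bdev_nvme_attach_controller params block per subsystem id through the heredoc visible below, presumably joined into a single "subsystems" document before bdev_svc parses it.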
00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:02.126 { 00:28:02.126 "params": { 00:28:02.126 "name": "Nvme$subsystem", 00:28:02.126 "trtype": "$TEST_TRANSPORT", 00:28:02.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.126 "adrfam": "ipv4", 00:28:02.126 "trsvcid": "$NVMF_PORT", 00:28:02.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.126 "hdgst": ${hdgst:-false}, 00:28:02.126 "ddgst": ${ddgst:-false} 00:28:02.126 }, 00:28:02.126 "method": "bdev_nvme_attach_controller" 00:28:02.126 } 00:28:02.126 EOF 00:28:02.126 )") 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:02.126 { 00:28:02.126 "params": { 00:28:02.126 "name": "Nvme$subsystem", 00:28:02.126 "trtype": "$TEST_TRANSPORT", 00:28:02.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.126 "adrfam": "ipv4", 00:28:02.126 "trsvcid": "$NVMF_PORT", 00:28:02.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.126 "hdgst": ${hdgst:-false}, 00:28:02.126 "ddgst": ${ddgst:-false} 00:28:02.126 }, 00:28:02.126 "method": "bdev_nvme_attach_controller" 00:28:02.126 } 00:28:02.126 EOF 00:28:02.126 )") 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:02.126 { 00:28:02.126 "params": { 00:28:02.126 "name": "Nvme$subsystem", 00:28:02.126 "trtype": "$TEST_TRANSPORT", 00:28:02.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.126 "adrfam": "ipv4", 00:28:02.126 "trsvcid": "$NVMF_PORT", 00:28:02.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.126 "hdgst": ${hdgst:-false}, 00:28:02.126 "ddgst": ${ddgst:-false} 00:28:02.126 }, 00:28:02.126 "method": "bdev_nvme_attach_controller" 00:28:02.126 } 00:28:02.126 EOF 00:28:02.126 )") 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:02.126 { 00:28:02.126 "params": { 00:28:02.126 "name": "Nvme$subsystem", 00:28:02.126 "trtype": "$TEST_TRANSPORT", 00:28:02.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.126 "adrfam": "ipv4", 00:28:02.126 "trsvcid": 
"$NVMF_PORT", 00:28:02.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.126 "hdgst": ${hdgst:-false}, 00:28:02.126 "ddgst": ${ddgst:-false} 00:28:02.126 }, 00:28:02.126 "method": "bdev_nvme_attach_controller" 00:28:02.126 } 00:28:02.126 EOF 00:28:02.126 )") 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:02.126 { 00:28:02.126 "params": { 00:28:02.126 "name": "Nvme$subsystem", 00:28:02.126 "trtype": "$TEST_TRANSPORT", 00:28:02.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.126 "adrfam": "ipv4", 00:28:02.126 "trsvcid": "$NVMF_PORT", 00:28:02.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.126 "hdgst": ${hdgst:-false}, 00:28:02.126 "ddgst": ${ddgst:-false} 00:28:02.126 }, 00:28:02.126 "method": "bdev_nvme_attach_controller" 00:28:02.126 } 00:28:02.126 EOF 00:28:02.126 )") 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:02.126 { 00:28:02.126 "params": { 00:28:02.126 "name": "Nvme$subsystem", 00:28:02.126 "trtype": "$TEST_TRANSPORT", 00:28:02.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.126 "adrfam": "ipv4", 00:28:02.126 "trsvcid": "$NVMF_PORT", 00:28:02.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.126 "hdgst": ${hdgst:-false}, 00:28:02.126 "ddgst": ${ddgst:-false} 00:28:02.126 }, 00:28:02.126 "method": "bdev_nvme_attach_controller" 00:28:02.126 } 00:28:02.126 EOF 00:28:02.126 )") 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:02.126 [2024-06-09 09:06:24.528923] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:28:02.126 [2024-06-09 09:06:24.528979] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:02.126 { 00:28:02.126 "params": { 00:28:02.126 "name": "Nvme$subsystem", 00:28:02.126 "trtype": "$TEST_TRANSPORT", 00:28:02.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.126 "adrfam": "ipv4", 00:28:02.126 "trsvcid": "$NVMF_PORT", 00:28:02.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.126 "hdgst": ${hdgst:-false}, 00:28:02.126 "ddgst": ${ddgst:-false} 00:28:02.126 }, 00:28:02.126 "method": "bdev_nvme_attach_controller" 00:28:02.126 } 00:28:02.126 EOF 00:28:02.126 )") 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:02.126 { 00:28:02.126 "params": { 00:28:02.126 "name": "Nvme$subsystem", 00:28:02.126 "trtype": "$TEST_TRANSPORT", 00:28:02.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.126 "adrfam": "ipv4", 00:28:02.126 "trsvcid": "$NVMF_PORT", 00:28:02.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.126 "hdgst": ${hdgst:-false}, 00:28:02.126 "ddgst": ${ddgst:-false} 00:28:02.126 }, 00:28:02.126 "method": "bdev_nvme_attach_controller" 00:28:02.126 } 00:28:02.126 EOF 00:28:02.126 )") 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:02.126 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:02.126 { 00:28:02.126 "params": { 00:28:02.126 "name": "Nvme$subsystem", 00:28:02.127 "trtype": "$TEST_TRANSPORT", 00:28:02.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.127 "adrfam": "ipv4", 00:28:02.127 "trsvcid": "$NVMF_PORT", 00:28:02.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.127 "hdgst": ${hdgst:-false}, 00:28:02.127 "ddgst": ${ddgst:-false} 00:28:02.127 }, 00:28:02.127 "method": "bdev_nvme_attach_controller" 00:28:02.127 } 00:28:02.127 EOF 00:28:02.127 )") 00:28:02.127 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:02.127 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:02.127 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:02.127 { 00:28:02.127 "params": { 00:28:02.127 "name": "Nvme$subsystem", 00:28:02.127 "trtype": "$TEST_TRANSPORT", 00:28:02.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:02.127 "adrfam": "ipv4", 00:28:02.127 "trsvcid": "$NVMF_PORT", 00:28:02.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:02.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:02.127 "hdgst": ${hdgst:-false}, 00:28:02.127 
"ddgst": ${ddgst:-false} 00:28:02.127 }, 00:28:02.127 "method": "bdev_nvme_attach_controller" 00:28:02.127 } 00:28:02.127 EOF 00:28:02.127 )") 00:28:02.127 EAL: No free 2048 kB hugepages reported on node 1 00:28:02.127 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:02.127 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:28:02.127 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:28:02.127 09:06:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:02.127 "params": { 00:28:02.127 "name": "Nvme1", 00:28:02.127 "trtype": "rdma", 00:28:02.127 "traddr": "192.168.100.8", 00:28:02.127 "adrfam": "ipv4", 00:28:02.127 "trsvcid": "4420", 00:28:02.127 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:02.127 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:02.127 "hdgst": false, 00:28:02.127 "ddgst": false 00:28:02.127 }, 00:28:02.127 "method": "bdev_nvme_attach_controller" 00:28:02.127 },{ 00:28:02.127 "params": { 00:28:02.127 "name": "Nvme2", 00:28:02.127 "trtype": "rdma", 00:28:02.127 "traddr": "192.168.100.8", 00:28:02.127 "adrfam": "ipv4", 00:28:02.127 "trsvcid": "4420", 00:28:02.127 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:02.127 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:02.127 "hdgst": false, 00:28:02.127 "ddgst": false 00:28:02.127 }, 00:28:02.127 "method": "bdev_nvme_attach_controller" 00:28:02.127 },{ 00:28:02.127 "params": { 00:28:02.127 "name": "Nvme3", 00:28:02.127 "trtype": "rdma", 00:28:02.127 "traddr": "192.168.100.8", 00:28:02.127 "adrfam": "ipv4", 00:28:02.127 "trsvcid": "4420", 00:28:02.127 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:02.127 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:02.127 "hdgst": false, 00:28:02.127 "ddgst": false 00:28:02.127 }, 00:28:02.127 "method": "bdev_nvme_attach_controller" 00:28:02.127 },{ 00:28:02.127 "params": { 00:28:02.127 "name": "Nvme4", 00:28:02.127 "trtype": "rdma", 00:28:02.127 "traddr": "192.168.100.8", 00:28:02.127 "adrfam": "ipv4", 00:28:02.127 "trsvcid": "4420", 00:28:02.127 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:02.127 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:02.127 "hdgst": false, 00:28:02.127 "ddgst": false 00:28:02.127 }, 00:28:02.127 "method": "bdev_nvme_attach_controller" 00:28:02.127 },{ 00:28:02.127 "params": { 00:28:02.127 "name": "Nvme5", 00:28:02.127 "trtype": "rdma", 00:28:02.127 "traddr": "192.168.100.8", 00:28:02.127 "adrfam": "ipv4", 00:28:02.127 "trsvcid": "4420", 00:28:02.127 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:02.127 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:02.127 "hdgst": false, 00:28:02.127 "ddgst": false 00:28:02.127 }, 00:28:02.127 "method": "bdev_nvme_attach_controller" 00:28:02.127 },{ 00:28:02.127 "params": { 00:28:02.127 "name": "Nvme6", 00:28:02.127 "trtype": "rdma", 00:28:02.127 "traddr": "192.168.100.8", 00:28:02.127 "adrfam": "ipv4", 00:28:02.127 "trsvcid": "4420", 00:28:02.127 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:02.127 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:02.127 "hdgst": false, 00:28:02.127 "ddgst": false 00:28:02.127 }, 00:28:02.127 "method": "bdev_nvme_attach_controller" 00:28:02.127 },{ 00:28:02.127 "params": { 00:28:02.127 "name": "Nvme7", 00:28:02.127 "trtype": "rdma", 00:28:02.127 "traddr": "192.168.100.8", 00:28:02.127 "adrfam": "ipv4", 00:28:02.127 "trsvcid": "4420", 00:28:02.127 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:02.127 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:02.127 "hdgst": false, 
00:28:02.127 "ddgst": false 00:28:02.127 }, 00:28:02.127 "method": "bdev_nvme_attach_controller" 00:28:02.127 },{ 00:28:02.127 "params": { 00:28:02.127 "name": "Nvme8", 00:28:02.127 "trtype": "rdma", 00:28:02.127 "traddr": "192.168.100.8", 00:28:02.127 "adrfam": "ipv4", 00:28:02.127 "trsvcid": "4420", 00:28:02.127 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:02.127 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:02.127 "hdgst": false, 00:28:02.127 "ddgst": false 00:28:02.127 }, 00:28:02.127 "method": "bdev_nvme_attach_controller" 00:28:02.127 },{ 00:28:02.127 "params": { 00:28:02.127 "name": "Nvme9", 00:28:02.127 "trtype": "rdma", 00:28:02.127 "traddr": "192.168.100.8", 00:28:02.127 "adrfam": "ipv4", 00:28:02.127 "trsvcid": "4420", 00:28:02.127 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:02.127 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:02.127 "hdgst": false, 00:28:02.127 "ddgst": false 00:28:02.127 }, 00:28:02.127 "method": "bdev_nvme_attach_controller" 00:28:02.127 },{ 00:28:02.127 "params": { 00:28:02.127 "name": "Nvme10", 00:28:02.127 "trtype": "rdma", 00:28:02.127 "traddr": "192.168.100.8", 00:28:02.127 "adrfam": "ipv4", 00:28:02.127 "trsvcid": "4420", 00:28:02.127 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:02.127 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:02.127 "hdgst": false, 00:28:02.127 "ddgst": false 00:28:02.127 }, 00:28:02.127 "method": "bdev_nvme_attach_controller" 00:28:02.127 }' 00:28:02.127 [2024-06-09 09:06:24.584680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.127 [2024-06-09 09:06:24.656309] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.063 09:06:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:03.063 09:06:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@863 -- # return 0 00:28:03.063 09:06:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:03.063 09:06:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.063 09:06:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:03.063 09:06:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.063 09:06:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1460472 00:28:03.063 09:06:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:28:03.063 09:06:25 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:28:04.000 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1460472 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:04.000 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1460195 00:28:04.000 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:04.000 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:04.000 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:28:04.000 09:06:26 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:28:04.000 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:04.000 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:04.000 { 00:28:04.000 "params": { 00:28:04.000 "name": "Nvme$subsystem", 00:28:04.000 "trtype": "$TEST_TRANSPORT", 00:28:04.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:04.000 "adrfam": "ipv4", 00:28:04.000 "trsvcid": "$NVMF_PORT", 00:28:04.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:04.000 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:04.000 "hdgst": ${hdgst:-false}, 00:28:04.000 "ddgst": ${ddgst:-false} 00:28:04.000 }, 00:28:04.000 "method": "bdev_nvme_attach_controller" 00:28:04.000 } 00:28:04.000 EOF 00:28:04.000 )") 00:28:04.000 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:04.000 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:04.000 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:04.000 { 00:28:04.000 "params": { 00:28:04.000 "name": "Nvme$subsystem", 00:28:04.000 "trtype": "$TEST_TRANSPORT", 00:28:04.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:04.000 "adrfam": "ipv4", 00:28:04.000 "trsvcid": "$NVMF_PORT", 00:28:04.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:04.000 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:04.000 "hdgst": ${hdgst:-false}, 00:28:04.000 "ddgst": ${ddgst:-false} 00:28:04.000 }, 00:28:04.000 "method": "bdev_nvme_attach_controller" 00:28:04.000 } 00:28:04.000 EOF 00:28:04.000 )") 00:28:04.000 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:04.000 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:04.000 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:04.000 { 00:28:04.000 "params": { 00:28:04.000 "name": "Nvme$subsystem", 00:28:04.000 "trtype": "$TEST_TRANSPORT", 00:28:04.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:04.000 "adrfam": "ipv4", 00:28:04.000 "trsvcid": "$NVMF_PORT", 00:28:04.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:04.000 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:04.000 "hdgst": ${hdgst:-false}, 00:28:04.000 "ddgst": ${ddgst:-false} 00:28:04.000 }, 00:28:04.000 "method": "bdev_nvme_attach_controller" 00:28:04.000 } 00:28:04.000 EOF 00:28:04.000 )") 00:28:04.000 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:04.000 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:04.000 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:04.000 { 00:28:04.000 "params": { 00:28:04.000 "name": "Nvme$subsystem", 00:28:04.000 "trtype": "$TEST_TRANSPORT", 00:28:04.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:04.000 "adrfam": "ipv4", 00:28:04.000 "trsvcid": "$NVMF_PORT", 00:28:04.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:04.000 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:04.000 "hdgst": ${hdgst:-false}, 00:28:04.000 "ddgst": ${ddgst:-false} 00:28:04.000 }, 00:28:04.000 "method": "bdev_nvme_attach_controller" 00:28:04.000 } 00:28:04.000 EOF 00:28:04.000 )") 00:28:04.000 09:06:26 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:04.000 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:04.000 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:04.000 { 00:28:04.000 "params": { 00:28:04.000 "name": "Nvme$subsystem", 00:28:04.000 "trtype": "$TEST_TRANSPORT", 00:28:04.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:04.000 "adrfam": "ipv4", 00:28:04.000 "trsvcid": "$NVMF_PORT", 00:28:04.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:04.000 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:04.000 "hdgst": ${hdgst:-false}, 00:28:04.000 "ddgst": ${ddgst:-false} 00:28:04.000 }, 00:28:04.000 "method": "bdev_nvme_attach_controller" 00:28:04.000 } 00:28:04.000 EOF 00:28:04.000 )") 00:28:04.000 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:04.000 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:04.000 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:04.000 { 00:28:04.000 "params": { 00:28:04.000 "name": "Nvme$subsystem", 00:28:04.000 "trtype": "$TEST_TRANSPORT", 00:28:04.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:04.000 "adrfam": "ipv4", 00:28:04.000 "trsvcid": "$NVMF_PORT", 00:28:04.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:04.000 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:04.000 "hdgst": ${hdgst:-false}, 00:28:04.000 "ddgst": ${ddgst:-false} 00:28:04.000 }, 00:28:04.000 "method": "bdev_nvme_attach_controller" 00:28:04.000 } 00:28:04.000 EOF 00:28:04.000 )") 00:28:04.000 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:04.000 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:04.000 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:04.000 { 00:28:04.000 "params": { 00:28:04.000 "name": "Nvme$subsystem", 00:28:04.000 "trtype": "$TEST_TRANSPORT", 00:28:04.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:04.000 "adrfam": "ipv4", 00:28:04.000 "trsvcid": "$NVMF_PORT", 00:28:04.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:04.000 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:04.000 "hdgst": ${hdgst:-false}, 00:28:04.000 "ddgst": ${ddgst:-false} 00:28:04.000 }, 00:28:04.000 "method": "bdev_nvme_attach_controller" 00:28:04.000 } 00:28:04.000 EOF 00:28:04.000 )") 00:28:04.000 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:04.000 [2024-06-09 09:06:26.553274] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
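The bdevperf process whose startup banner appears above was launched by the shutdown.sh@91 step traced earlier. Reduced to its essentials (path abbreviated via $rootdir; flags copied from the xtrace), the invocation is roughly:

	# --json <(...) is what the trace shows as --json /dev/fd/62: process
	# substitution hands bdevperf the generated config without a temp file.
	# -q 64 = queue depth, -o 65536 = 64 KiB I/O size, -w verify workload,
	# -t 1 = run for one second.
	"$rootdir"/build/examples/bdevperf \
		--json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
		-q 64 -o 65536 -w verify -t 1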
00:28:04.000 [2024-06-09 09:06:26.553328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1460916 ] 00:28:04.000 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:04.000 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:04.000 { 00:28:04.000 "params": { 00:28:04.000 "name": "Nvme$subsystem", 00:28:04.000 "trtype": "$TEST_TRANSPORT", 00:28:04.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:04.000 "adrfam": "ipv4", 00:28:04.000 "trsvcid": "$NVMF_PORT", 00:28:04.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:04.000 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:04.000 "hdgst": ${hdgst:-false}, 00:28:04.000 "ddgst": ${ddgst:-false} 00:28:04.000 }, 00:28:04.000 "method": "bdev_nvme_attach_controller" 00:28:04.000 } 00:28:04.000 EOF 00:28:04.000 )") 00:28:04.000 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:04.260 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:04.260 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:04.260 { 00:28:04.260 "params": { 00:28:04.260 "name": "Nvme$subsystem", 00:28:04.260 "trtype": "$TEST_TRANSPORT", 00:28:04.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:04.260 "adrfam": "ipv4", 00:28:04.260 "trsvcid": "$NVMF_PORT", 00:28:04.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:04.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:04.260 "hdgst": ${hdgst:-false}, 00:28:04.260 "ddgst": ${ddgst:-false} 00:28:04.260 }, 00:28:04.260 "method": "bdev_nvme_attach_controller" 00:28:04.260 } 00:28:04.260 EOF 00:28:04.260 )") 00:28:04.260 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:04.260 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:04.260 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:04.260 { 00:28:04.260 "params": { 00:28:04.260 "name": "Nvme$subsystem", 00:28:04.260 "trtype": "$TEST_TRANSPORT", 00:28:04.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:04.260 "adrfam": "ipv4", 00:28:04.260 "trsvcid": "$NVMF_PORT", 00:28:04.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:04.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:04.260 "hdgst": ${hdgst:-false}, 00:28:04.260 "ddgst": ${ddgst:-false} 00:28:04.260 }, 00:28:04.260 "method": "bdev_nvme_attach_controller" 00:28:04.260 } 00:28:04.260 EOF 00:28:04.260 )") 00:28:04.260 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:28:04.260 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
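The @556-@558 steps that follow ("jq .", IFS=',', printf) join the accumulated fragments with commas and pretty-print the result. A sketch of that join, with the outer JSON skeleton assumed (the authoritative wrapper lives in nvmf/common.sh):

	# "${config[*]}" expands the array joined on the first character of IFS,
	# i.e. ',', turning the ten fragments into one JSON array body; the
	# subshell keeps the IFS change contained, and jq validates/pretty-prints.
	jq . <<- JSON
	{
	  "subsystems": [
	    {
	      "subsystem": "bdev",
	      "config": [
	        $(IFS=','; printf '%s\n' "${config[*]}")
	      ]
	    }
	  ]
	}
	JSON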
00:28:04.260 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:28:04.260 09:06:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:04.260 "params": { 00:28:04.260 "name": "Nvme1", 00:28:04.260 "trtype": "rdma", 00:28:04.260 "traddr": "192.168.100.8", 00:28:04.260 "adrfam": "ipv4", 00:28:04.260 "trsvcid": "4420", 00:28:04.260 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:04.260 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:04.260 "hdgst": false, 00:28:04.260 "ddgst": false 00:28:04.260 }, 00:28:04.260 "method": "bdev_nvme_attach_controller" 00:28:04.260 },{ 00:28:04.260 "params": { 00:28:04.260 "name": "Nvme2", 00:28:04.260 "trtype": "rdma", 00:28:04.260 "traddr": "192.168.100.8", 00:28:04.260 "adrfam": "ipv4", 00:28:04.260 "trsvcid": "4420", 00:28:04.260 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:04.260 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:04.260 "hdgst": false, 00:28:04.260 "ddgst": false 00:28:04.260 }, 00:28:04.260 "method": "bdev_nvme_attach_controller" 00:28:04.260 },{ 00:28:04.260 "params": { 00:28:04.260 "name": "Nvme3", 00:28:04.260 "trtype": "rdma", 00:28:04.260 "traddr": "192.168.100.8", 00:28:04.260 "adrfam": "ipv4", 00:28:04.260 "trsvcid": "4420", 00:28:04.260 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:04.260 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:04.260 "hdgst": false, 00:28:04.260 "ddgst": false 00:28:04.260 }, 00:28:04.260 "method": "bdev_nvme_attach_controller" 00:28:04.260 },{ 00:28:04.260 "params": { 00:28:04.260 "name": "Nvme4", 00:28:04.260 "trtype": "rdma", 00:28:04.260 "traddr": "192.168.100.8", 00:28:04.260 "adrfam": "ipv4", 00:28:04.260 "trsvcid": "4420", 00:28:04.260 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:04.260 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:04.260 "hdgst": false, 00:28:04.260 "ddgst": false 00:28:04.260 }, 00:28:04.261 "method": "bdev_nvme_attach_controller" 00:28:04.261 },{ 00:28:04.261 "params": { 00:28:04.261 "name": "Nvme5", 00:28:04.261 "trtype": "rdma", 00:28:04.261 "traddr": "192.168.100.8", 00:28:04.261 "adrfam": "ipv4", 00:28:04.261 "trsvcid": "4420", 00:28:04.261 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:04.261 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:04.261 "hdgst": false, 00:28:04.261 "ddgst": false 00:28:04.261 }, 00:28:04.261 "method": "bdev_nvme_attach_controller" 00:28:04.261 },{ 00:28:04.261 "params": { 00:28:04.261 "name": "Nvme6", 00:28:04.261 "trtype": "rdma", 00:28:04.261 "traddr": "192.168.100.8", 00:28:04.261 "adrfam": "ipv4", 00:28:04.261 "trsvcid": "4420", 00:28:04.261 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:04.261 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:04.261 "hdgst": false, 00:28:04.261 "ddgst": false 00:28:04.261 }, 00:28:04.261 "method": "bdev_nvme_attach_controller" 00:28:04.261 },{ 00:28:04.261 "params": { 00:28:04.261 "name": "Nvme7", 00:28:04.261 "trtype": "rdma", 00:28:04.261 "traddr": "192.168.100.8", 00:28:04.261 "adrfam": "ipv4", 00:28:04.261 "trsvcid": "4420", 00:28:04.261 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:04.261 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:04.261 "hdgst": false, 00:28:04.261 "ddgst": false 00:28:04.261 }, 00:28:04.261 "method": "bdev_nvme_attach_controller" 00:28:04.261 },{ 00:28:04.261 "params": { 00:28:04.261 "name": "Nvme8", 00:28:04.261 "trtype": "rdma", 00:28:04.261 "traddr": "192.168.100.8", 00:28:04.261 "adrfam": "ipv4", 00:28:04.261 "trsvcid": "4420", 00:28:04.261 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:04.261 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:28:04.261 "hdgst": false, 00:28:04.261 "ddgst": false 00:28:04.261 }, 00:28:04.261 "method": "bdev_nvme_attach_controller" 00:28:04.261 },{ 00:28:04.261 "params": { 00:28:04.261 "name": "Nvme9", 00:28:04.261 "trtype": "rdma", 00:28:04.261 "traddr": "192.168.100.8", 00:28:04.261 "adrfam": "ipv4", 00:28:04.261 "trsvcid": "4420", 00:28:04.261 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:04.261 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:04.261 "hdgst": false, 00:28:04.261 "ddgst": false 00:28:04.261 }, 00:28:04.261 "method": "bdev_nvme_attach_controller" 00:28:04.261 },{ 00:28:04.261 "params": { 00:28:04.261 "name": "Nvme10", 00:28:04.261 "trtype": "rdma", 00:28:04.261 "traddr": "192.168.100.8", 00:28:04.261 "adrfam": "ipv4", 00:28:04.261 "trsvcid": "4420", 00:28:04.261 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:04.261 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:04.261 "hdgst": false, 00:28:04.261 "ddgst": false 00:28:04.261 }, 00:28:04.261 "method": "bdev_nvme_attach_controller" 00:28:04.261 }' 00:28:04.261 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.261 [2024-06-09 09:06:26.611093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.261 [2024-06-09 09:06:26.684225] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.197 Running I/O for 1 seconds... 00:28:06.574 00:28:06.574 Latency(us) 00:28:06.574 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:06.574 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:06.574 Verification LBA range: start 0x0 length 0x400 00:28:06.574 Nvme1n1 : 1.05 365.93 22.87 0.00 0.00 172940.27 44439.65 172765.38 00:28:06.574 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:06.574 Verification LBA range: start 0x0 length 0x400 00:28:06.574 Nvme2n1 : 1.18 380.17 23.76 0.00 0.00 162880.02 8301.23 159783.01 00:28:06.574 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:06.574 Verification LBA range: start 0x0 length 0x400 00:28:06.574 Nvme3n1 : 1.17 383.42 23.96 0.00 0.00 160867.47 8800.55 143804.71 00:28:06.574 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:06.574 Verification LBA range: start 0x0 length 0x400 00:28:06.575 Nvme4n1 : 1.16 386.13 24.13 0.00 0.00 157380.27 6865.68 127826.41 00:28:06.575 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:06.575 Verification LBA range: start 0x0 length 0x400 00:28:06.575 Nvme5n1 : 1.17 383.01 23.94 0.00 0.00 155846.70 7645.87 130822.34 00:28:06.575 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:06.575 Verification LBA range: start 0x0 length 0x400 00:28:06.575 Nvme6n1 : 1.17 382.55 23.91 0.00 0.00 153947.05 8176.40 123332.51 00:28:06.575 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:06.575 Verification LBA range: start 0x0 length 0x400 00:28:06.575 Nvme7n1 : 1.17 382.07 23.88 0.00 0.00 151920.15 8800.55 114344.72 00:28:06.575 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:06.575 Verification LBA range: start 0x0 length 0x400 00:28:06.575 Nvme8n1 : 1.17 381.65 23.85 0.00 0.00 149501.14 9175.04 101861.67 00:28:06.575 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:06.575 Verification LBA range: start 0x0 length 0x400 00:28:06.575 Nvme9n1 : 1.18 381.12 23.82 0.00 0.00 147924.95 10048.85 108852.18 00:28:06.575 Job: Nvme10n1 
(Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:06.575 Verification LBA range: start 0x0 length 0x400 00:28:06.575 Nvme10n1 : 1.18 325.56 20.35 0.00 0.00 170271.90 3151.97 227690.79 00:28:06.575 =================================================================================================================== 00:28:06.575 Total : 3751.61 234.48 0.00 0.00 157958.05 3151.97 227690.79 00:28:06.575 09:06:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:28:06.575 09:06:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:06.575 09:06:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:06.575 09:06:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:06.575 09:06:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:06.575 09:06:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:06.575 09:06:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:28:06.575 09:06:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:28:06.575 09:06:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:28:06.575 09:06:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:28:06.575 09:06:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:06.575 09:06:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:28:06.575 rmmod nvme_rdma 00:28:06.575 rmmod nvme_fabrics 00:28:06.575 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:06.575 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:28:06.575 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:28:06.575 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1460195 ']' 00:28:06.575 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1460195 00:28:06.575 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@949 -- # '[' -z 1460195 ']' 00:28:06.575 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # kill -0 1460195 00:28:06.575 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # uname 00:28:06.575 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:06.575 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1460195 00:28:06.575 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:06.575 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:06.575 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1460195' 00:28:06.575 killing process with pid 1460195 00:28:06.575 09:06:29 
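As a quick cross-check of the table above: the MiB/s column is just IOPS multiplied by the 64 KiB I/O size from the job line, e.g. for the Total row:

	# 3751.61 IOPS * 65536 B = ~245.9 MB/s = 234.48 MiB/s, matching the table
	awk 'BEGIN { printf "%.2f MiB/s\n", 3751.61 * 65536 / (1024 * 1024) }'

09:06:29 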
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # kill 1460195 00:28:06.575 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # wait 1460195 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:28:07.143 00:28:07.143 real 0m11.983s 00:28:07.143 user 0m29.632s 00:28:07.143 sys 0m5.229s 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:07.143 ************************************ 00:28:07.143 END TEST nvmf_shutdown_tc1 00:28:07.143 ************************************ 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:07.143 ************************************ 00:28:07.143 START TEST nvmf_shutdown_tc2 00:28:07.143 ************************************ 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc2 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:07.143 09:06:29 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:07.143 09:06:29 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:07.143 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:07.143 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:28:07.143 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # modinfo irdma 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:07.144 Found net devices under 0000:af:00.0: cvl_0_0 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:07.144 Found net devices under 0000:af:00.1: cvl_0_1 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # rdma_device_init 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # uname 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@63 -- # modprobe ib_core 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo cvl_0_0 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo cvl_0_1 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:28:07.144 12: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:28:07.144 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:28:07.144 altname enp175s0f0np0 00:28:07.144 altname ens801f0np0 00:28:07.144 inet 192.168.100.8/24 scope global cvl_0_0 00:28:07.144 valid_lft forever preferred_lft forever 00:28:07.144 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:28:07.144 valid_lft forever preferred_lft forever 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:28:07.144 
09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:28:07.144 13: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:28:07.144 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:28:07.144 altname enp175s0f1np1 00:28:07.144 altname ens801f1np1 00:28:07.144 inet 192.168.100.9/24 scope global cvl_0_1 00:28:07.144 valid_lft forever preferred_lft forever 00:28:07.144 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:28:07.144 valid_lft forever preferred_lft forever 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:07.144 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo cvl_0_0 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo cvl_0_1 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:28:07.402 192.168.100.9' 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:28:07.402 192.168.100.9' 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # head -n 1 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:28:07.402 192.168.100.9' 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # tail -n +2 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # head -n 1 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:07.402 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 
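The get_ip_address helper exercised in the @112/@113 lines above reduces to a three-stage pipeline; the body below is reconstructed from the trace (the authoritative definition is in nvmf/common.sh):

	get_ip_address() {
		local interface=$1
		# `ip -o -4` prints one line per address; field 4 is the CIDR,
		# e.g. 192.168.100.8/24, and cut strips the prefix length
		ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
	}
	# get_ip_address cvl_0_0 -> 192.168.100.8, get_ip_address cvl_0_1 -> 192.168.100.9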
00:28:07.403 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:07.403 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1461509 00:28:07.403 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1461509 00:28:07.403 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:07.403 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 1461509 ']' 00:28:07.403 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:07.403 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:07.403 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:07.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:07.403 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:07.403 09:06:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:07.403 [2024-06-09 09:06:29.814825] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 00:28:07.403 [2024-06-09 09:06:29.814865] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:07.403 EAL: No free 2048 kB hugepages reported on node 1 00:28:07.403 [2024-06-09 09:06:29.869974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:07.403 [2024-06-09 09:06:29.947957] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:07.403 [2024-06-09 09:06:29.947991] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:07.403 [2024-06-09 09:06:29.947998] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:07.403 [2024-06-09 09:06:29.948004] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:07.403 [2024-06-09 09:06:29.948009] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
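The nvmfpid=.../waitforlisten sequence above amounts to starting the target in the background and polling its RPC socket until it answers. A rough sketch (poll loop assumed; binary path and flags copied from the trace):

	"$rootdir"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
	nvmfpid=$!
	# waitforlisten: block until /var/tmp/spdk.sock accepts RPC calls
	until "$rootdir"/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
		sleep 0.5
	done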
00:28:07.403 [2024-06-09 09:06:29.948104] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:28:07.403 [2024-06-09 09:06:29.948187] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:28:07.403 [2024-06-09 09:06:29.948293] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:07.403 [2024-06-09 09:06:29.948294] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:08.338 [2024-06-09 09:06:30.684098] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0x56bbe0/0x56b220) succeed. 00:28:08.338 [2024-06-09 09:06:30.692941] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0x56cf90/0x56b7a0) succeed. 00:28:08.338 [2024-06-09 09:06:30.692961] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:08.338 09:06:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:08.338 Malloc1 00:28:08.338 [2024-06-09 09:06:30.792163] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:08.338 Malloc2 00:28:08.338 
Malloc3 00:28:08.597 Malloc4 00:28:08.597 Malloc5 00:28:08.597 Malloc6 00:28:08.597 Malloc7 00:28:08.597 Malloc8 00:28:08.597 Malloc9 00:28:08.856 Malloc10 00:28:08.856 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:08.856 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:08.856 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:08.856 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:08.856 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1461785 00:28:08.856 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1461785 /var/tmp/bdevperf.sock 00:28:08.856 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # '[' -z 1461785 ']' 00:28:08.856 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:08.856 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:08.856 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:08.856 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:08.856 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:08.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
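The create_subsystems phase just completed batches its RPCs: the shutdown.sh@27-28 loop appends one block per subsystem to rpcs.txt, and shutdown.sh@35 replays the whole file through a single rpc_cmd. The block itself is not echoed into the log, so this is only a plausible sketch (the malloc size/block size and serial prefix are assumptions; the traced code cats a heredoc, an echo group is equivalent) of what would produce the Malloc1-Malloc10 bdevs and the 4420 listener seen above:

# One iteration of the shutdown.sh@27 loop (sketch; appended to rpcs.txt).
{
    echo "bdev_malloc_create -b Malloc$i 128 512"
    echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
    echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
    echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a $NVMF_FIRST_TARGET_IP -s 4420"
} >> "$testdir/rpcs.txt"

rpc_cmd < "$testdir/rpcs.txt"    # shutdown.sh@35: replay the whole batch in one process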
00:28:08.856 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:28:08.856 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:08.856 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:28:08.856 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:08.856 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:08.857 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:08.857 { 00:28:08.857 "params": { 00:28:08.857 "name": "Nvme$subsystem", 00:28:08.857 "trtype": "$TEST_TRANSPORT", 00:28:08.857 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.857 "adrfam": "ipv4", 00:28:08.857 "trsvcid": "$NVMF_PORT", 00:28:08.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.857 "hdgst": ${hdgst:-false}, 00:28:08.857 "ddgst": ${ddgst:-false} 00:28:08.857 }, 00:28:08.857 "method": "bdev_nvme_attach_controller" 00:28:08.857 } 00:28:08.857 EOF 00:28:08.857 )") 00:28:08.857 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:08.857 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:08.857 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:08.857 { 00:28:08.857 "params": { 00:28:08.857 "name": "Nvme$subsystem", 00:28:08.857 "trtype": "$TEST_TRANSPORT", 00:28:08.857 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.857 "adrfam": "ipv4", 00:28:08.857 "trsvcid": "$NVMF_PORT", 00:28:08.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.857 "hdgst": ${hdgst:-false}, 00:28:08.857 "ddgst": ${ddgst:-false} 00:28:08.857 }, 00:28:08.857 "method": "bdev_nvme_attach_controller" 00:28:08.857 } 00:28:08.857 EOF 00:28:08.857 )") 00:28:08.857 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:08.857 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:08.857 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:08.857 { 00:28:08.857 "params": { 00:28:08.857 "name": "Nvme$subsystem", 00:28:08.857 "trtype": "$TEST_TRANSPORT", 00:28:08.857 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.857 "adrfam": "ipv4", 00:28:08.857 "trsvcid": "$NVMF_PORT", 00:28:08.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.857 "hdgst": ${hdgst:-false}, 00:28:08.857 "ddgst": ${ddgst:-false} 00:28:08.857 }, 00:28:08.857 "method": "bdev_nvme_attach_controller" 00:28:08.857 } 00:28:08.857 EOF 00:28:08.857 )") 00:28:08.857 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:08.857 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:08.857 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:08.857 { 00:28:08.857 "params": { 00:28:08.857 "name": "Nvme$subsystem", 00:28:08.857 "trtype": "$TEST_TRANSPORT", 00:28:08.857 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.857 "adrfam": "ipv4", 00:28:08.857 "trsvcid": 
"$NVMF_PORT", 00:28:08.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.857 "hdgst": ${hdgst:-false}, 00:28:08.857 "ddgst": ${ddgst:-false} 00:28:08.857 }, 00:28:08.857 "method": "bdev_nvme_attach_controller" 00:28:08.857 } 00:28:08.857 EOF 00:28:08.857 )") 00:28:08.857 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:08.857 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:08.857 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:08.857 { 00:28:08.857 "params": { 00:28:08.857 "name": "Nvme$subsystem", 00:28:08.857 "trtype": "$TEST_TRANSPORT", 00:28:08.857 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.857 "adrfam": "ipv4", 00:28:08.857 "trsvcid": "$NVMF_PORT", 00:28:08.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.857 "hdgst": ${hdgst:-false}, 00:28:08.857 "ddgst": ${ddgst:-false} 00:28:08.857 }, 00:28:08.857 "method": "bdev_nvme_attach_controller" 00:28:08.857 } 00:28:08.857 EOF 00:28:08.857 )") 00:28:08.857 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:08.857 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:08.857 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:08.857 { 00:28:08.857 "params": { 00:28:08.857 "name": "Nvme$subsystem", 00:28:08.857 "trtype": "$TEST_TRANSPORT", 00:28:08.857 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.857 "adrfam": "ipv4", 00:28:08.857 "trsvcid": "$NVMF_PORT", 00:28:08.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.857 "hdgst": ${hdgst:-false}, 00:28:08.857 "ddgst": ${ddgst:-false} 00:28:08.857 }, 00:28:08.857 "method": "bdev_nvme_attach_controller" 00:28:08.857 } 00:28:08.857 EOF 00:28:08.857 )") 00:28:08.857 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:08.857 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:08.857 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:08.857 { 00:28:08.857 "params": { 00:28:08.857 "name": "Nvme$subsystem", 00:28:08.857 "trtype": "$TEST_TRANSPORT", 00:28:08.857 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.857 "adrfam": "ipv4", 00:28:08.857 "trsvcid": "$NVMF_PORT", 00:28:08.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.857 "hdgst": ${hdgst:-false}, 00:28:08.857 "ddgst": ${ddgst:-false} 00:28:08.857 }, 00:28:08.857 "method": "bdev_nvme_attach_controller" 00:28:08.857 } 00:28:08.857 EOF 00:28:08.857 )") 00:28:08.857 [2024-06-09 09:06:31.266448] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:28:08.857 [2024-06-09 09:06:31.266496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1461785 ] 00:28:08.857 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:08.857 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:08.857 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:08.857 { 00:28:08.857 "params": { 00:28:08.857 "name": "Nvme$subsystem", 00:28:08.857 "trtype": "$TEST_TRANSPORT", 00:28:08.857 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.857 "adrfam": "ipv4", 00:28:08.857 "trsvcid": "$NVMF_PORT", 00:28:08.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.857 "hdgst": ${hdgst:-false}, 00:28:08.857 "ddgst": ${ddgst:-false} 00:28:08.857 }, 00:28:08.857 "method": "bdev_nvme_attach_controller" 00:28:08.857 } 00:28:08.857 EOF 00:28:08.857 )") 00:28:08.857 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:08.857 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:08.857 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:08.857 { 00:28:08.857 "params": { 00:28:08.857 "name": "Nvme$subsystem", 00:28:08.857 "trtype": "$TEST_TRANSPORT", 00:28:08.857 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.857 "adrfam": "ipv4", 00:28:08.857 "trsvcid": "$NVMF_PORT", 00:28:08.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.857 "hdgst": ${hdgst:-false}, 00:28:08.857 "ddgst": ${ddgst:-false} 00:28:08.857 }, 00:28:08.857 "method": "bdev_nvme_attach_controller" 00:28:08.857 } 00:28:08.857 EOF 00:28:08.857 )") 00:28:08.857 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:08.857 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:08.857 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:08.857 { 00:28:08.857 "params": { 00:28:08.857 "name": "Nvme$subsystem", 00:28:08.857 "trtype": "$TEST_TRANSPORT", 00:28:08.857 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:08.857 "adrfam": "ipv4", 00:28:08.857 "trsvcid": "$NVMF_PORT", 00:28:08.857 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:08.857 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:08.857 "hdgst": ${hdgst:-false}, 00:28:08.857 "ddgst": ${ddgst:-false} 00:28:08.857 }, 00:28:08.857 "method": "bdev_nvme_attach_controller" 00:28:08.857 } 00:28:08.857 EOF 00:28:08.857 )") 00:28:08.857 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:28:08.857 EAL: No free 2048 kB hugepages reported on node 1 00:28:08.857 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
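gen_nvmf_target_json, traced above, expands one bdev_nvme_attach_controller parameter block per subsystem from a heredoc template and comma-joins the fragments; the result is validated with jq (common.sh@556) and handed to bdevperf as /dev/fd/63 via process substitution. A sketch of the traced common.sh@532-558 logic (the enclosing "subsystems" wrapper object is not visible in the trace and is omitted here):

gen_nvmf_target_json() {
    local subsystem config=()                 # common.sh@532: one JSON fragment per subsystem
    for subsystem in "${@:-1}"; do            # common.sh@534
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,                               # common.sh@557
    printf '%s\n' "${config[*]}"              # common.sh@558: comma-join the fragments
}

# Paired with the bdevperf invocation above: process substitution is what becomes /dev/fd/63:
# bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) -q 64 -o 65536 -w verify -t 10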
00:28:08.857 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:28:08.857 09:06:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:08.857 "params": { 00:28:08.857 "name": "Nvme1", 00:28:08.857 "trtype": "rdma", 00:28:08.857 "traddr": "192.168.100.8", 00:28:08.857 "adrfam": "ipv4", 00:28:08.857 "trsvcid": "4420", 00:28:08.857 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:08.857 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:08.857 "hdgst": false, 00:28:08.857 "ddgst": false 00:28:08.857 }, 00:28:08.857 "method": "bdev_nvme_attach_controller" 00:28:08.857 },{ 00:28:08.857 "params": { 00:28:08.857 "name": "Nvme2", 00:28:08.857 "trtype": "rdma", 00:28:08.857 "traddr": "192.168.100.8", 00:28:08.857 "adrfam": "ipv4", 00:28:08.858 "trsvcid": "4420", 00:28:08.858 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:08.858 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:08.858 "hdgst": false, 00:28:08.858 "ddgst": false 00:28:08.858 }, 00:28:08.858 "method": "bdev_nvme_attach_controller" 00:28:08.858 },{ 00:28:08.858 "params": { 00:28:08.858 "name": "Nvme3", 00:28:08.858 "trtype": "rdma", 00:28:08.858 "traddr": "192.168.100.8", 00:28:08.858 "adrfam": "ipv4", 00:28:08.858 "trsvcid": "4420", 00:28:08.858 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:08.858 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:08.858 "hdgst": false, 00:28:08.858 "ddgst": false 00:28:08.858 }, 00:28:08.858 "method": "bdev_nvme_attach_controller" 00:28:08.858 },{ 00:28:08.858 "params": { 00:28:08.858 "name": "Nvme4", 00:28:08.858 "trtype": "rdma", 00:28:08.858 "traddr": "192.168.100.8", 00:28:08.858 "adrfam": "ipv4", 00:28:08.858 "trsvcid": "4420", 00:28:08.858 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:08.858 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:08.858 "hdgst": false, 00:28:08.858 "ddgst": false 00:28:08.858 }, 00:28:08.858 "method": "bdev_nvme_attach_controller" 00:28:08.858 },{ 00:28:08.858 "params": { 00:28:08.858 "name": "Nvme5", 00:28:08.858 "trtype": "rdma", 00:28:08.858 "traddr": "192.168.100.8", 00:28:08.858 "adrfam": "ipv4", 00:28:08.858 "trsvcid": "4420", 00:28:08.858 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:08.858 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:08.858 "hdgst": false, 00:28:08.858 "ddgst": false 00:28:08.858 }, 00:28:08.858 "method": "bdev_nvme_attach_controller" 00:28:08.858 },{ 00:28:08.858 "params": { 00:28:08.858 "name": "Nvme6", 00:28:08.858 "trtype": "rdma", 00:28:08.858 "traddr": "192.168.100.8", 00:28:08.858 "adrfam": "ipv4", 00:28:08.858 "trsvcid": "4420", 00:28:08.858 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:08.858 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:08.858 "hdgst": false, 00:28:08.858 "ddgst": false 00:28:08.858 }, 00:28:08.858 "method": "bdev_nvme_attach_controller" 00:28:08.858 },{ 00:28:08.858 "params": { 00:28:08.858 "name": "Nvme7", 00:28:08.858 "trtype": "rdma", 00:28:08.858 "traddr": "192.168.100.8", 00:28:08.858 "adrfam": "ipv4", 00:28:08.858 "trsvcid": "4420", 00:28:08.858 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:08.858 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:08.858 "hdgst": false, 00:28:08.858 "ddgst": false 00:28:08.858 }, 00:28:08.858 "method": "bdev_nvme_attach_controller" 00:28:08.858 },{ 00:28:08.858 "params": { 00:28:08.858 "name": "Nvme8", 00:28:08.858 "trtype": "rdma", 00:28:08.858 "traddr": "192.168.100.8", 00:28:08.858 "adrfam": "ipv4", 00:28:08.858 "trsvcid": "4420", 00:28:08.858 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:08.858 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:28:08.858 "hdgst": false, 00:28:08.858 "ddgst": false 00:28:08.858 }, 00:28:08.858 "method": "bdev_nvme_attach_controller" 00:28:08.858 },{ 00:28:08.858 "params": { 00:28:08.858 "name": "Nvme9", 00:28:08.858 "trtype": "rdma", 00:28:08.858 "traddr": "192.168.100.8", 00:28:08.858 "adrfam": "ipv4", 00:28:08.858 "trsvcid": "4420", 00:28:08.858 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:08.858 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:08.858 "hdgst": false, 00:28:08.858 "ddgst": false 00:28:08.858 }, 00:28:08.858 "method": "bdev_nvme_attach_controller" 00:28:08.858 },{ 00:28:08.858 "params": { 00:28:08.858 "name": "Nvme10", 00:28:08.858 "trtype": "rdma", 00:28:08.858 "traddr": "192.168.100.8", 00:28:08.858 "adrfam": "ipv4", 00:28:08.858 "trsvcid": "4420", 00:28:08.858 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:08.858 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:08.858 "hdgst": false, 00:28:08.858 "ddgst": false 00:28:08.858 }, 00:28:08.858 "method": "bdev_nvme_attach_controller" 00:28:08.858 }' 00:28:08.858 [2024-06-09 09:06:31.320508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.858 [2024-06-09 09:06:31.392017] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.794 Running I/O for 10 seconds... 00:28:09.794 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:09.794 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@863 -- # return 0 00:28:09.794 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:09.794 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:09.794 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:10.053 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:10.053 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:10.053 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:10.053 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:10.053 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:28:10.054 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:28:10.054 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:10.054 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:10.054 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:10.054 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:10.054 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:10.054 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:10.054 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:10.054 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 
00:28:10.054 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:28:10.054 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:10.313 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:10.313 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:10.313 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:10.313 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:10.313 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:10.313 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:10.572 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:10.572 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=195 00:28:10.572 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:28:10.572 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:28:10.572 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:28:10.572 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:28:10.572 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1461785 00:28:10.572 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 1461785 ']' 00:28:10.572 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 1461785 00:28:10.572 09:06:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:28:10.572 09:06:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:10.572 09:06:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1461785 00:28:10.572 09:06:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_0 00:28:10.572 09:06:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_0 = sudo ']' 00:28:10.572 09:06:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1461785' 00:28:10.572 killing process with pid 1461785 00:28:10.572 09:06:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 1461785 00:28:10.572 09:06:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 1461785
00:28:10.572 Received shutdown signal, test time was about 0.857761 seconds
00:28:10.572
00:28:10.572 Latency(us)
00:28:10.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:10.572 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:10.572 Verification LBA range: start 0x0 length 0x400
00:28:10.572 Nvme1n1 : 0.85 377.49 23.59 0.00 0.00 167187.02 10298.51 173764.02
00:28:10.572 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:10.572 Verification LBA range: start 0x0 length 0x400
00:28:10.572 Nvme2n1 : 0.85 376.72 23.54 0.00 0.00 164476.83 11359.57 160781.65
00:28:10.572 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:10.572 Verification LBA range: start 0x0 length 0x400
00:28:10.572 Nvme3n1 : 0.84 381.48 23.84 0.00 0.00 159327.48 6054.28 143804.71
00:28:10.572 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:10.572 Verification LBA range: start 0x0 length 0x400
00:28:10.572 Nvme4n1 : 0.84 380.66 23.79 0.00 0.00 156406.93 22469.49 125829.12
00:28:10.572 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:10.572 Verification LBA range: start 0x0 length 0x400
00:28:10.572 Nvme5n1 : 0.85 376.37 23.52 0.00 0.00 153674.75 7770.70 146800.64
00:28:10.572 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:10.572 Verification LBA range: start 0x0 length 0x400
00:28:10.572 Nvme6n1 : 0.85 375.78 23.49 0.00 0.00 151562.04 8051.57 147799.28
00:28:10.572 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:10.572 Verification LBA range: start 0x0 length 0x400
00:28:10.572 Nvme7n1 : 0.85 375.27 23.45 0.00 0.00 148267.11 8363.64 139810.13
00:28:10.572 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:10.572 Verification LBA range: start 0x0 length 0x400
00:28:10.572 Nvme8n1 : 0.85 374.78 23.42 0.00 0.00 145083.34 8613.30 126328.44
00:28:10.572 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:10.572 Verification LBA range: start 0x0 length 0x400
00:28:10.572 Nvme9n1 : 0.86 374.06 23.38 0.00 0.00 143088.88 9362.29 119337.94
00:28:10.572 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:10.572 Verification LBA range: start 0x0 length 0x400
00:28:10.572 Nvme10n1 : 0.86 298.67 18.67 0.00 0.00 175228.34 10298.51 255652.82
00:28:10.572 ===================================================================================================================
00:28:10.572 Total : 3691.27 230.70 0.00 0.00 156046.64 6054.28 255652.82
00:28:10.831 09:06:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:28:12.207 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1461509 00:28:12.207 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:28:12.207 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:12.207 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:12.207 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:12.207 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:12.207 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:12.207 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:28:12.207 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:28:12.207 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:28:12.207 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:28:12.207 09:06:34
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:12.207 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:28:12.207 rmmod nvme_rdma 00:28:12.207 rmmod nvme_fabrics 00:28:12.207 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:12.207 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:28:12.207 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:28:12.207 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1461509 ']' 00:28:12.207 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1461509 00:28:12.207 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@949 -- # '[' -z 1461509 ']' 00:28:12.207 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # kill -0 1461509 00:28:12.207 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # uname 00:28:12.207 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:12.207 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1461509 00:28:12.207 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:12.207 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:12.207 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1461509' 00:28:12.207 killing process with pid 1461509 00:28:12.207 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # kill 1461509 00:28:12.207 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # wait 1461509 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:28:12.467 00:28:12.467 real 0m5.330s 00:28:12.467 user 0m21.831s 00:28:12.467 sys 0m1.110s 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # xtrace_disable 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:12.467 ************************************ 00:28:12.467 END TEST nvmf_shutdown_tc2 00:28:12.467 ************************************ 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1100 -- # '[' 2 -le 1 ']' 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1106 -- # xtrace_disable 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:12.467 ************************************ 00:28:12.467 START TEST nvmf_shutdown_tc3 00:28:12.467 ************************************ 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # nvmf_shutdown_tc3 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:28:12.467 
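The killprocess helper traced twice above (once for the bdevperf pid 1461785, once for the target pid 1461509) follows a guard-then-kill-then-reap pattern. Reconstructed from the traced autotest_common.sh line numbers; the sudo special case is simplified here:

killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1                            # @949: require a pid
    kill -0 "$pid" || return 1                           # @953: bail if it is already gone
    if [ "$(uname)" = Linux ]; then                      # @954
        process_name=$(ps --no-headers -o comm= "$pid")  # @955: e.g. reactor_0 for SPDK apps
    fi
    # @959: the real helper handles process_name = sudo differently; omitted in this sketch
    echo "killing process with pid $pid"                 # @967
    kill "$pid"                                          # @968: SIGTERM first, allow a clean shutdown
    wait "$pid"                                          # @973: reap it and propagate its exit status
}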
09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:12.467 09:06:34 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:12.467 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.467 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:12.468 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ rdma == rdma ]] 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@373 -- # [[ -e /sys/module/irdma/parameters/roce_ena ]] 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@375 -- # (( 1 != 1 )) 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # modinfo irdma 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # modprobe irdma roce_ena=1 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:12.468 Found net devices under 0000:af:00.0: cvl_0_0 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:12.468 Found net devices under 0000:af:00.1: cvl_0_1 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # rdma_device_init 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # uname 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:28:12.468 09:06:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@63 -- # modprobe ib_core 00:28:12.468 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:28:12.468 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:28:12.468 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:28:12.468 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:28:12.727 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:28:12.727 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:28:12.727 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:12.727 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:28:12.727 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:12.727 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:12.727 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:12.727 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:12.727 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:12.727 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:12.727 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:12.727 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:28:12.727 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo cvl_0_0 00:28:12.727 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:28:12.727 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:12.727 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:12.727 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:28:12.727 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:12.727 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:28:12.727 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo cvl_0_1 00:28:12.727 09:06:35 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:28:12.727 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:12.727 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address cvl_0_0 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show cvl_0_0 00:28:12.728 12: cvl_0_0: mtu 1500 qdisc mq state UP group default qlen 1000 00:28:12.728 link/ether b4:96:91:a5:c8:d4 brd ff:ff:ff:ff:ff:ff 00:28:12.728 altname enp175s0f0np0 00:28:12.728 altname ens801f0np0 00:28:12.728 inet 192.168.100.8/24 scope global cvl_0_0 00:28:12.728 valid_lft forever preferred_lft forever 00:28:12.728 inet6 fe80::b696:91ff:fea5:c8d4/64 scope link proto kernel_ll 00:28:12.728 valid_lft forever preferred_lft forever 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address cvl_0_1 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show cvl_0_1 00:28:12.728 13: cvl_0_1: mtu 1500 qdisc mq state UP group default qlen 1000 00:28:12.728 link/ether b4:96:91:a5:c8:d5 brd ff:ff:ff:ff:ff:ff 00:28:12.728 altname enp175s0f1np1 00:28:12.728 altname ens801f1np1 00:28:12.728 inet 192.168.100.9/24 scope global cvl_0_1 00:28:12.728 valid_lft forever preferred_lft forever 00:28:12.728 inet6 fe80::b696:91ff:fea5:c8d5/64 scope link proto kernel_ll 00:28:12.728 valid_lft forever preferred_lft forever 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # get_available_rdma_ips 
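get_rdma_if_list, traced above, intersects the net devices discovered during PCI scanning (the global net_devs array filled at common.sh@401) with the RDMA-capable ones reported by rxe_cfg; the continue 2 hops back to the outer loop as soon as a device matches. A sketch assembled from the traced common.sh@92-105 lines:

get_rdma_if_list() {
    local net_dev rxe_net_dev rxe_net_devs            # common.sh@92
    mapfile -t rxe_net_devs < <(rxe_cfg rxe-net)      # common.sh@94: RDMA-capable netdevs
    (( ${#rxe_net_devs[@]} == 0 )) && return 1        # common.sh@96: nothing to match against
    for net_dev in "${net_devs[@]}"; do               # common.sh@101
        for rxe_net_dev in "${rxe_net_devs[@]}"; do   # common.sh@102
            if [[ $net_dev == "$rxe_net_dev" ]]; then # common.sh@103: literal match (hence \c\v\l... in the trace)
                echo "$net_dev"                       # common.sh@104
                continue 2                            # common.sh@105: move on to the next net_dev
            fi
        done
    done
}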
00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ cvl_0_0 == \c\v\l\_\0\_\0 ]] 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo cvl_0_0 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\0 ]] 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ cvl_0_1 == \c\v\l\_\0\_\1 ]] 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo cvl_0_1 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address cvl_0_0 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=cvl_0_0 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_0 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address cvl_0_1 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=cvl_0_1 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show cvl_0_1 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:28:12.728 09:06:35 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:28:12.728 192.168.100.9' 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:28:12.728 192.168.100.9' 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # head -n 1 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:28:12.728 192.168.100.9' 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # head -n 1 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # tail -n +2 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1462578 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1462578 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 1462578 ']' 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:12.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:12.728 09:06:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:12.728 [2024-06-09 09:06:35.237341] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:28:12.728 [2024-06-09 09:06:35.237388] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:12.728 EAL: No free 2048 kB hugepages reported on node 1 00:28:12.987 [2024-06-09 09:06:35.293049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:12.987 [2024-06-09 09:06:35.367570] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:12.987 [2024-06-09 09:06:35.367610] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:12.987 [2024-06-09 09:06:35.367617] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:12.987 [2024-06-09 09:06:35.367622] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:12.987 [2024-06-09 09:06:35.367627] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:12.987 [2024-06-09 09:06:35.367750] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 2 00:28:12.987 [2024-06-09 09:06:35.367842] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 3 00:28:12.987 [2024-06-09 09:06:35.367950] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 1 00:28:12.987 [2024-06-09 09:06:35.367951] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 4 00:28:13.554 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:13.554 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:28:13.554 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:13.554 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:13.554 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:13.554 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:13.554 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:13.554 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:13.554 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:13.554 [2024-06-09 09:06:36.096256] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f0(0xf96be0/0xf96220) succeed. 00:28:13.554 [2024-06-09 09:06:36.105053] rdma.c:2574:create_ib_device: *NOTICE*: Create IB device rocep175s0f1(0xf97f90/0xf967a0) succeed. 00:28:13.554 [2024-06-09 09:06:36.105074] rdma.c:2793:nvmf_rdma_create: *NOTICE*: Adjusting the io unit size to fit the device's maximum I/O size. 
New I/O unit size 24576 00:28:13.554 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:13.554 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:13.554 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:13.554 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@723 -- # xtrace_disable 00:28:13.554 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:13.813 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:13.813 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:13.813 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:13.813 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:13.813 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:13.813 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:13.813 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:13.813 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:13.813 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:13.813 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:13.813 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:13.813 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:13.813 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:13.813 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:13.813 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:13.813 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:13.813 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:13.813 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:13.813 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:13.813 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:13.813 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:13.813 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:13.813 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:13.813 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:13.813 Malloc1 00:28:13.813 [2024-06-09 09:06:36.199912] rdma.c:3029:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:13.813 Malloc2 00:28:13.813 
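The shutdown.sh@22-@28 loop above builds one RPC batch file with a stanza per subsystem; only the `cat` calls are visible in the xtrace, so the heredoc body below is an illustrative placeholder (standard SPDK RPC names consistent with the Malloc1-Malloc10 bdevs and the 192.168.100.8:4420 listener seen in this log), not the verbatim script:

# Sketch: one stanza per subsystem appended to rpcs.txt (body is illustrative).
num_subsystems=({1..10})
rm -rf rpcs.txt
for i in "${num_subsystems[@]}"; do
    cat >> rpcs.txt <<EOF
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
EOF
done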
Malloc3 00:28:13.813 Malloc4 00:28:13.813 Malloc5 00:28:14.072 Malloc6 00:28:14.072 Malloc7 00:28:14.072 Malloc8 00:28:14.072 Malloc9 00:28:14.072 Malloc10 00:28:14.072 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:14.072 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:14.072 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@729 -- # xtrace_disable 00:28:14.072 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:14.072 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1462857 00:28:14.072 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1462857 /var/tmp/bdevperf.sock 00:28:14.332 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # '[' -z 1462857 ']' 00:28:14.332 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:14.332 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local max_retries=100 00:28:14.332 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:14.332 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:14.332 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:14.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
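At shutdown.sh@124-@126 the test launches bdevperf against the ten subsystems, feeding it a generated JSON config over an anonymous fd (`--json /dev/fd/63` in the trace is bash process substitution). The shape of that launch, reconstructed from the trace (paths shortened; gen_nvmf_target_json and waitforlisten are the common.sh helpers being traced, assumed sourced here):

# gen_nvmf_target_json emits one bdev_nvme_attach_controller stanza per id;
# its heredocs appear verbatim in the xtrace that follows.
perf_json=$(gen_nvmf_target_json {1..10})

# Hand the JSON to bdevperf on an anonymous fd instead of a temp file.
./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(echo "$perf_json") \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!
waitforlisten "$perfpid" /var/tmp/bdevperf.sock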
00:28:14.332 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # xtrace_disable 00:28:14.332 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:28:14.332 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:14.332 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:28:14.332 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:14.332 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:14.332 { 00:28:14.332 "params": { 00:28:14.332 "name": "Nvme$subsystem", 00:28:14.332 "trtype": "$TEST_TRANSPORT", 00:28:14.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:14.332 "adrfam": "ipv4", 00:28:14.332 "trsvcid": "$NVMF_PORT", 00:28:14.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:14.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:14.332 "hdgst": ${hdgst:-false}, 00:28:14.332 "ddgst": ${ddgst:-false} 00:28:14.332 }, 00:28:14.332 "method": "bdev_nvme_attach_controller" 00:28:14.332 } 00:28:14.332 EOF 00:28:14.332 )") 00:28:14.332 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:14.332 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:14.332 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:14.332 { 00:28:14.332 "params": { 00:28:14.332 "name": "Nvme$subsystem", 00:28:14.332 "trtype": "$TEST_TRANSPORT", 00:28:14.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:14.332 "adrfam": "ipv4", 00:28:14.332 "trsvcid": "$NVMF_PORT", 00:28:14.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:14.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:14.332 "hdgst": ${hdgst:-false}, 00:28:14.332 "ddgst": ${ddgst:-false} 00:28:14.332 }, 00:28:14.332 "method": "bdev_nvme_attach_controller" 00:28:14.332 } 00:28:14.332 EOF 00:28:14.332 )") 00:28:14.332 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:14.332 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:14.332 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:14.332 { 00:28:14.332 "params": { 00:28:14.332 "name": "Nvme$subsystem", 00:28:14.332 "trtype": "$TEST_TRANSPORT", 00:28:14.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:14.332 "adrfam": "ipv4", 00:28:14.332 "trsvcid": "$NVMF_PORT", 00:28:14.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:14.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:14.332 "hdgst": ${hdgst:-false}, 00:28:14.332 "ddgst": ${ddgst:-false} 00:28:14.332 }, 00:28:14.332 "method": "bdev_nvme_attach_controller" 00:28:14.332 } 00:28:14.332 EOF 00:28:14.332 )") 00:28:14.332 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:14.332 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:14.332 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:14.332 { 00:28:14.332 "params": { 00:28:14.332 "name": "Nvme$subsystem", 00:28:14.332 "trtype": "$TEST_TRANSPORT", 00:28:14.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:14.332 "adrfam": "ipv4", 00:28:14.332 "trsvcid": 
"$NVMF_PORT", 00:28:14.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:14.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:14.332 "hdgst": ${hdgst:-false}, 00:28:14.332 "ddgst": ${ddgst:-false} 00:28:14.332 }, 00:28:14.332 "method": "bdev_nvme_attach_controller" 00:28:14.332 } 00:28:14.332 EOF 00:28:14.332 )") 00:28:14.332 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:14.332 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:14.332 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:14.332 { 00:28:14.332 "params": { 00:28:14.332 "name": "Nvme$subsystem", 00:28:14.332 "trtype": "$TEST_TRANSPORT", 00:28:14.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:14.332 "adrfam": "ipv4", 00:28:14.332 "trsvcid": "$NVMF_PORT", 00:28:14.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:14.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:14.332 "hdgst": ${hdgst:-false}, 00:28:14.332 "ddgst": ${ddgst:-false} 00:28:14.332 }, 00:28:14.332 "method": "bdev_nvme_attach_controller" 00:28:14.332 } 00:28:14.332 EOF 00:28:14.332 )") 00:28:14.332 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:14.332 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:14.332 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:14.332 { 00:28:14.332 "params": { 00:28:14.332 "name": "Nvme$subsystem", 00:28:14.332 "trtype": "$TEST_TRANSPORT", 00:28:14.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:14.332 "adrfam": "ipv4", 00:28:14.332 "trsvcid": "$NVMF_PORT", 00:28:14.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:14.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:14.332 "hdgst": ${hdgst:-false}, 00:28:14.332 "ddgst": ${ddgst:-false} 00:28:14.332 }, 00:28:14.332 "method": "bdev_nvme_attach_controller" 00:28:14.332 } 00:28:14.332 EOF 00:28:14.332 )") 00:28:14.332 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:14.332 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:14.332 [2024-06-09 09:06:36.670044] Starting SPDK v24.09-pre git sha1 e55c9a812 / DPDK 24.03.0 initialization... 
00:28:14.332 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:14.332 { 00:28:14.332 "params": { 00:28:14.332 "name": "Nvme$subsystem", 00:28:14.332 "trtype": "$TEST_TRANSPORT", 00:28:14.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:14.332 "adrfam": "ipv4", 00:28:14.332 "trsvcid": "$NVMF_PORT", 00:28:14.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:14.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:14.332 "hdgst": ${hdgst:-false}, 00:28:14.332 "ddgst": ${ddgst:-false} 00:28:14.332 }, 00:28:14.332 "method": "bdev_nvme_attach_controller" 00:28:14.332 } 00:28:14.332 EOF 00:28:14.332 )") 00:28:14.332 [2024-06-09 09:06:36.670090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1462857 ] 00:28:14.332 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:14.332 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:14.333 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:14.333 { 00:28:14.333 "params": { 00:28:14.333 "name": "Nvme$subsystem", 00:28:14.333 "trtype": "$TEST_TRANSPORT", 00:28:14.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:14.333 "adrfam": "ipv4", 00:28:14.333 "trsvcid": "$NVMF_PORT", 00:28:14.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:14.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:14.333 "hdgst": ${hdgst:-false}, 00:28:14.333 "ddgst": ${ddgst:-false} 00:28:14.333 }, 00:28:14.333 "method": "bdev_nvme_attach_controller" 00:28:14.333 } 00:28:14.333 EOF 00:28:14.333 )") 00:28:14.333 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:14.333 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:14.333 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:14.333 { 00:28:14.333 "params": { 00:28:14.333 "name": "Nvme$subsystem", 00:28:14.333 "trtype": "$TEST_TRANSPORT", 00:28:14.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:14.333 "adrfam": "ipv4", 00:28:14.333 "trsvcid": "$NVMF_PORT", 00:28:14.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:14.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:14.333 "hdgst": ${hdgst:-false}, 00:28:14.333 "ddgst": ${ddgst:-false} 00:28:14.333 }, 00:28:14.333 "method": "bdev_nvme_attach_controller" 00:28:14.333 } 00:28:14.333 EOF 00:28:14.333 )") 00:28:14.333 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:14.333 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:14.333 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:14.333 { 00:28:14.333 "params": { 00:28:14.333 "name": "Nvme$subsystem", 00:28:14.333 "trtype": "$TEST_TRANSPORT", 00:28:14.333 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:14.333 "adrfam": "ipv4", 00:28:14.333 "trsvcid": "$NVMF_PORT", 00:28:14.333 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:14.333 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:14.333 "hdgst": ${hdgst:-false}, 00:28:14.333 "ddgst": ${ddgst:-false} 00:28:14.333 }, 00:28:14.333 "method": "bdev_nvme_attach_controller" 
00:28:14.333 } 00:28:14.333 EOF 00:28:14.333 )") 00:28:14.333 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:14.333 EAL: No free 2048 kB hugepages reported on node 1 00:28:14.333 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:28:14.333 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:28:14.333 09:06:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:14.333 "params": { 00:28:14.333 "name": "Nvme1", 00:28:14.333 "trtype": "rdma", 00:28:14.333 "traddr": "192.168.100.8", 00:28:14.333 "adrfam": "ipv4", 00:28:14.333 "trsvcid": "4420", 00:28:14.333 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:14.333 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:14.333 "hdgst": false, 00:28:14.333 "ddgst": false 00:28:14.333 }, 00:28:14.333 "method": "bdev_nvme_attach_controller" 00:28:14.333 },{ 00:28:14.333 "params": { 00:28:14.333 "name": "Nvme2", 00:28:14.333 "trtype": "rdma", 00:28:14.333 "traddr": "192.168.100.8", 00:28:14.333 "adrfam": "ipv4", 00:28:14.333 "trsvcid": "4420", 00:28:14.333 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:14.333 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:14.333 "hdgst": false, 00:28:14.333 "ddgst": false 00:28:14.333 }, 00:28:14.333 "method": "bdev_nvme_attach_controller" 00:28:14.333 },{ 00:28:14.333 "params": { 00:28:14.333 "name": "Nvme3", 00:28:14.333 "trtype": "rdma", 00:28:14.333 "traddr": "192.168.100.8", 00:28:14.333 "adrfam": "ipv4", 00:28:14.333 "trsvcid": "4420", 00:28:14.333 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:14.333 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:14.333 "hdgst": false, 00:28:14.333 "ddgst": false 00:28:14.333 }, 00:28:14.333 "method": "bdev_nvme_attach_controller" 00:28:14.333 },{ 00:28:14.333 "params": { 00:28:14.333 "name": "Nvme4", 00:28:14.333 "trtype": "rdma", 00:28:14.333 "traddr": "192.168.100.8", 00:28:14.333 "adrfam": "ipv4", 00:28:14.333 "trsvcid": "4420", 00:28:14.333 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:14.333 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:14.333 "hdgst": false, 00:28:14.333 "ddgst": false 00:28:14.333 }, 00:28:14.333 "method": "bdev_nvme_attach_controller" 00:28:14.333 },{ 00:28:14.333 "params": { 00:28:14.333 "name": "Nvme5", 00:28:14.333 "trtype": "rdma", 00:28:14.333 "traddr": "192.168.100.8", 00:28:14.333 "adrfam": "ipv4", 00:28:14.333 "trsvcid": "4420", 00:28:14.333 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:14.333 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:14.333 "hdgst": false, 00:28:14.333 "ddgst": false 00:28:14.333 }, 00:28:14.333 "method": "bdev_nvme_attach_controller" 00:28:14.333 },{ 00:28:14.333 "params": { 00:28:14.333 "name": "Nvme6", 00:28:14.333 "trtype": "rdma", 00:28:14.333 "traddr": "192.168.100.8", 00:28:14.333 "adrfam": "ipv4", 00:28:14.333 "trsvcid": "4420", 00:28:14.333 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:14.333 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:14.333 "hdgst": false, 00:28:14.333 "ddgst": false 00:28:14.333 }, 00:28:14.333 "method": "bdev_nvme_attach_controller" 00:28:14.333 },{ 00:28:14.333 "params": { 00:28:14.333 "name": "Nvme7", 00:28:14.333 "trtype": "rdma", 00:28:14.333 "traddr": "192.168.100.8", 00:28:14.333 "adrfam": "ipv4", 00:28:14.333 "trsvcid": "4420", 00:28:14.333 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:14.333 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:14.333 "hdgst": false, 00:28:14.333 "ddgst": false 00:28:14.333 }, 00:28:14.333 "method": 
"bdev_nvme_attach_controller" 00:28:14.333 },{ 00:28:14.333 "params": { 00:28:14.333 "name": "Nvme8", 00:28:14.333 "trtype": "rdma", 00:28:14.333 "traddr": "192.168.100.8", 00:28:14.333 "adrfam": "ipv4", 00:28:14.333 "trsvcid": "4420", 00:28:14.333 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:14.333 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:14.333 "hdgst": false, 00:28:14.333 "ddgst": false 00:28:14.333 }, 00:28:14.333 "method": "bdev_nvme_attach_controller" 00:28:14.333 },{ 00:28:14.333 "params": { 00:28:14.333 "name": "Nvme9", 00:28:14.333 "trtype": "rdma", 00:28:14.333 "traddr": "192.168.100.8", 00:28:14.333 "adrfam": "ipv4", 00:28:14.333 "trsvcid": "4420", 00:28:14.333 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:14.333 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:14.333 "hdgst": false, 00:28:14.333 "ddgst": false 00:28:14.333 }, 00:28:14.333 "method": "bdev_nvme_attach_controller" 00:28:14.333 },{ 00:28:14.333 "params": { 00:28:14.333 "name": "Nvme10", 00:28:14.333 "trtype": "rdma", 00:28:14.333 "traddr": "192.168.100.8", 00:28:14.333 "adrfam": "ipv4", 00:28:14.333 "trsvcid": "4420", 00:28:14.333 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:14.333 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:14.333 "hdgst": false, 00:28:14.333 "ddgst": false 00:28:14.333 }, 00:28:14.333 "method": "bdev_nvme_attach_controller" 00:28:14.333 }' 00:28:14.333 [2024-06-09 09:06:36.726104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.333 [2024-06-09 09:06:36.797918] reactor.c: 929:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.269 Running I/O for 10 seconds... 00:28:15.269 09:06:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@859 -- # (( i == 0 )) 00:28:15.269 09:06:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@863 -- # return 0 00:28:15.269 09:06:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:15.269 09:06:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:15.269 09:06:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:15.528 09:06:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:15.528 09:06:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:15.528 09:06:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:15.528 09:06:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:15.528 09:06:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:15.528 09:06:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:28:15.528 09:06:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:28:15.528 09:06:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:15.528 09:06:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:15.528 09:06:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:15.528 09:06:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 
-- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:15.528 09:06:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:15.528 09:06:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:15.528 09:06:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:15.528 09:06:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:28:15.528 09:06:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:28:15.528 09:06:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:15.787 09:06:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:15.787 09:06:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:15.787 09:06:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:15.787 09:06:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:15.787 09:06:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:15.787 09:06:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:16.045 09:06:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:16.046 09:06:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=195 00:28:16.046 09:06:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:28:16.046 09:06:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:28:16.046 09:06:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:28:16.046 09:06:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:28:16.046 09:06:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1462578 00:28:16.046 09:06:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@949 -- # '[' -z 1462578 ']' 00:28:16.046 09:06:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # kill -0 1462578 00:28:16.046 09:06:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # uname 00:28:16.046 09:06:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' Linux = Linux ']' 00:28:16.046 09:06:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # ps --no-headers -o comm= 1462578 00:28:16.046 09:06:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # process_name=reactor_1 00:28:16.046 09:06:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' reactor_1 = sudo ']' 00:28:16.046 09:06:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # echo 'killing process with pid 1462578' 00:28:16.046 killing process with pid 1462578 00:28:16.046 09:06:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # kill 1462578 00:28:16.046 09:06:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # wait 1462578 00:28:16.046 nvmf_tgt: rdma.c:4722: nvmf_rdma_poller_poll: Assertion 
`wc[i].opcode == IBV_WC_RDMA_READ' failed. 00:28:16.615 [2024-06-09 09:06:39.023838] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:28:16.615 [2024-06-09 09:06:39.023884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ef800 len:0x10000 key:0x54ccea8f 00:28:16.615 [2024-06-09 09:06:39.023895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1582880 sqhd:6940 p:0 m:0 dnr:0 00:28:16.615 [2024-06-09 09:06:39.023919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8df780 len:0x10000 key:0x54ccea8f 00:28:16.615 [2024-06-09 09:06:39.023927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1582880 sqhd:6940 p:0 m:0 dnr:0 00:28:16.615 [2024-06-09 09:06:39.023939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8cf700 len:0x10000 key:0x54ccea8f 00:28:16.615 [2024-06-09 09:06:39.023950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1582880 sqhd:6940 p:0 m:0 dnr:0 00:28:16.615 [2024-06-09 09:06:39.023961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8bf680 len:0x10000 key:0x54ccea8f 00:28:16.615 [2024-06-09 09:06:39.023967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1582880 sqhd:6940 p:0 m:0 dnr:0 00:28:16.615 [2024-06-09 09:06:39.023979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8af600 len:0x10000 key:0x54ccea8f 00:28:16.615 [2024-06-09 09:06:39.023985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1582880 sqhd:6940 p:0 m:0 dnr:0 00:28:16.615 [2024-06-09 09:06:39.023997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a89f580 len:0x10000 key:0x54ccea8f 00:28:16.615 [2024-06-09 09:06:39.024003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1582880 sqhd:6940 p:0 m:0 dnr:0 00:28:16.615 [2024-06-09 09:06:39.024014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a88f500 len:0x10000 key:0x54ccea8f 00:28:16.615 [2024-06-09 09:06:39.024021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1582880 sqhd:6940 p:0 m:0 dnr:0 00:28:16.615 [2024-06-09 09:06:39.024032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a87f480 len:0x10000 key:0x54ccea8f 00:28:16.615 [2024-06-09 09:06:39.024039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1582880 sqhd:6940 p:0 m:0 dnr:0 00:28:16.615 [2024-06-09 09:06:39.024050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a86f400 len:0x10000 key:0x54ccea8f 00:28:16.615 [2024-06-09 09:06:39.024056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1582880 sqhd:6940 p:0 m:0 dnr:0 00:28:16.615 [2024-06-09 09:06:39.024068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a85f380 len:0x10000 key:0x54ccea8f 00:28:16.615 [2024-06-09 09:06:39.024075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1582880 sqhd:6940 p:0 m:0 dnr:0 00:28:16.615 [2024-06-09 09:06:39.024086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a84f300 len:0x10000 key:0x54ccea8f 00:28:16.615 [2024-06-09 09:06:39.024092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1582880 sqhd:6940 p:0 m:0 dnr:0 00:28:16.615 [2024-06-09 09:06:39.024103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a45f980 len:0x10000 key:0xd4341b0b 00:28:16.615 [2024-06-09 09:06:39.024109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:1582880 sqhd:6940 p:0 m:0 dnr:0 00:28:16.615 [2024-06-09 09:06:39.024476] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806940 was disconnected and freed. reset controller. 00:28:16.615 [2024-06-09 09:06:39.024510] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:28:16.615 [2024-06-09 09:06:39.024707] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256900 was disconnected and freed. reset controller. 00:28:16.615 [2024-06-09 09:06:39.024732] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:28:16.615 [2024-06-09 09:06:39.024921] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256680 was disconnected and freed. reset controller. 00:28:16.615 [2024-06-09 09:06:39.024934] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:28:16.615 [2024-06-09 09:06:39.025112] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256400 was disconnected and freed. reset controller. 00:28:16.615 [2024-06-09 09:06:39.025124] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:28:16.615 [2024-06-09 09:06:39.025301] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256180 was disconnected and freed. reset controller. 00:28:16.615 [2024-06-09 09:06:39.025313] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:28:16.615 [2024-06-09 09:06:39.025489] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60ee80 was disconnected and freed. reset controller. 
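The crash above is the point of shutdown_tc3: waitforio polls bdevperf's iostat until Nvme1n1 has completed at least 100 reads (3 on the first poll, 195 on the second in this run), killprocess then SIGTERMs the nvmf target under the live workload, and the `wc[i].opcode == IBV_WC_RDMA_READ' assertion in nvmf_rdma_poller_poll fires during that forced teardown. A condensed sketch of the two helpers as traced at 09:06:37-09:06:38 (error handling trimmed):

# waitforio (shutdown.sh@57-@69): up to 10 polls, 0.25 s apart, until the
# first bdev has served >= 100 reads.
ret=1
for ((i = 10; i != 0; i--)); do
    read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
        | jq -r '.bdevs[0].num_read_ops')
    if [ "$read_io_count" -ge 100 ]; then
        ret=0
        break
    fi
    sleep 0.25
done

# killprocess (autotest_common.sh@949-@973): confirm the pid is still the
# target (reactor_1 here, not sudo), then SIGTERM it and reap it; the sudo
# branch is not exercised in this trace and is elided.
process_name=$(ps --no-headers -o comm= "$nvmfpid")
if [ "$process_name" != sudo ]; then
    echo "killing process with pid $nvmfpid"
    kill "$nvmfpid"
    wait "$nvmfpid"
fi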
00:28:16.615 [2024-06-09 09:06:39.025501] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:28:16.615 [2024-06-09 09:06:39.025676] bdev_nvme.c:1609:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806bc0 was disconnected and freed. reset controller. 00:28:16.615 [2024-06-09 09:06:39.025777] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:28:16.616 [2024-06-09 09:06:39.025807] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:28:16.616 [2024-06-09 09:06:39.025946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:16.616 [2024-06-09 09:06:39.025997] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:16.616 [2024-06-09 09:06:39.026014] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:28:16.616 [2024-06-09 09:06:39.026028] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:28:16.616 [2024-06-09 09:06:39.026043] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:28:16.616 [2024-06-09 09:06:39.026057] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:28:16.616 [2024-06-09 09:06:39.026072] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:28:16.616 [2024-06-09 09:06:39.026088] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:28:16.616 [2024-06-09 09:06:39.026211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:16.616 [2024-06-09 09:06:39.026225] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:28:16.616 [2024-06-09 09:06:39.026343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:16.616 [2024-06-09 09:06:39.026357] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:28:16.616 [2024-06-09 09:06:39.026474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:16.616 [2024-06-09 09:06:39.026492] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:28:16.616 [2024-06-09 09:06:39.026607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:16.616 [2024-06-09 09:06:39.026620] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:28:16.616 [2024-06-09 
09:06:39.026740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:16.616 [2024-06-09 09:06:39.026754] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:28:16.616 [2024-06-09 09:06:39.026869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:16.616 [2024-06-09 09:06:39.036607] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:28:16.616 [2024-06-09 09:06:39.036630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.616 [2024-06-09 09:06:39.036638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:10ea890 sqhd:b1c0 p:0 m:0 dnr:0 00:28:16.616 [2024-06-09 09:06:39.036646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.616 [2024-06-09 09:06:39.036669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:10ea890 sqhd:b1c0 p:0 m:0 dnr:0 00:28:16.616 [2024-06-09 09:06:39.036676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.616 [2024-06-09 09:06:39.036682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:10ea890 sqhd:b1c0 p:0 m:0 dnr:0 00:28:16.616 [2024-06-09 09:06:39.036689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.616 [2024-06-09 09:06:39.036695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:10ea890 sqhd:b1c0 p:0 m:0 dnr:0 00:28:16.616 [2024-06-09 09:06:39.036843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:16.616 [2024-06-09 09:06:39.036852] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:28:16.616 [2024-06-09 09:06:39.036883] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:16.616 [2024-06-09 09:06:39.036891] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:16.616 [2024-06-09 09:06:39.036896] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:28:16.616 [2024-06-09 09:06:39.036910] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:16.616 [2024-06-09 09:06:39.036917] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:16.616 [2024-06-09 09:06:39.036921] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e5300 00:28:16.616 [2024-06-09 09:06:39.036932] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:16.616 [2024-06-09 09:06:39.036938] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:16.616 [2024-06-09 09:06:39.036942] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d9c80 00:28:16.616 [2024-06-09 09:06:39.036955] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:16.616 [2024-06-09 09:06:39.036961] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:16.616 [2024-06-09 09:06:39.036966] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d2900 00:28:16.616 [2024-06-09 09:06:39.036976] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:16.616 [2024-06-09 09:06:39.036982] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:16.616 [2024-06-09 09:06:39.036987] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c6340 00:28:16.616 [2024-06-09 09:06:39.036997] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:16.616 [2024-06-09 09:06:39.037003] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:16.616 [2024-06-09 09:06:39.037007] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c5040 00:28:16.616 [2024-06-09 09:06:39.037017] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:16.616 [2024-06-09 09:06:39.037023] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:16.616 [2024-06-09 09:06:39.037027] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192a8500 00:28:16.616 [2024-06-09 09:06:39.045428] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:28:16.616 [2024-06-09 09:06:39.055469] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:16.616 [2024-06-09 09:06:39.065517] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:16.616 [2024-06-09 09:06:39.075542] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:16.616 [2024-06-09 09:06:39.085584] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:16.616 [2024-06-09 09:06:39.087803] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:28:16.616 [2024-06-09 09:06:39.087824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4cfd00 len:0x10000 key:0x336bad47 00:28:16.616 [2024-06-09 09:06:39.087833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.616 [2024-06-09 09:06:39.087851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4bfc80 len:0x10000 key:0x336bad47 00:28:16.616 [2024-06-09 09:06:39.087858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.616 [2024-06-09 09:06:39.087868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4afc00 len:0x10000 key:0x336bad47 00:28:16.616 [2024-06-09 09:06:39.087874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.616 [2024-06-09 09:06:39.087884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b49fb80 len:0x10000 key:0x336bad47 00:28:16.616 [2024-06-09 09:06:39.087891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.616 [2024-06-09 09:06:39.087903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b48fb00 len:0x10000 key:0x336bad47 00:28:16.616 [2024-06-09 09:06:39.087910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.616 [2024-06-09 09:06:39.087920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b47fa80 len:0x10000 key:0x336bad47 00:28:16.616 [2024-06-09 09:06:39.087926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.616 [2024-06-09 09:06:39.087936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b46fa00 len:0x10000 key:0x336bad47 00:28:16.616 [2024-06-09 09:06:39.087942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 
cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.616 [2024-06-09 09:06:39.087952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b45f980 len:0x10000 key:0x336bad47 00:28:16.616 [2024-06-09 09:06:39.087958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.616 [2024-06-09 09:06:39.087968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b44f900 len:0x10000 key:0x336bad47 00:28:16.616 [2024-06-09 09:06:39.087974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.616 [2024-06-09 09:06:39.087983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b43f880 len:0x10000 key:0x336bad47 00:28:16.616 [2024-06-09 09:06:39.087989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.616 [2024-06-09 09:06:39.087999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b42f800 len:0x10000 key:0x336bad47 00:28:16.616 [2024-06-09 09:06:39.088005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b41f780 len:0x10000 key:0x336bad47 00:28:16.617 [2024-06-09 09:06:39.088020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b40f700 len:0x10000 key:0x336bad47 00:28:16.617 [2024-06-09 09:06:39.088037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7f0000 len:0x10000 key:0x88afdef0 00:28:16.617 [2024-06-09 09:06:39.088052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7dff80 len:0x10000 key:0x88afdef0 00:28:16.617 [2024-06-09 09:06:39.088068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7cff00 len:0x10000 key:0x88afdef0 00:28:16.617 [2024-06-09 09:06:39.088085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 
cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7bfe80 len:0x10000 key:0x88afdef0 00:28:16.617 [2024-06-09 09:06:39.088102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7afe00 len:0x10000 key:0x88afdef0 00:28:16.617 [2024-06-09 09:06:39.088117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b79fd80 len:0x10000 key:0x88afdef0 00:28:16.617 [2024-06-09 09:06:39.088133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b78fd00 len:0x10000 key:0x88afdef0 00:28:16.617 [2024-06-09 09:06:39.088150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b77fc80 len:0x10000 key:0x88afdef0 00:28:16.617 [2024-06-09 09:06:39.088165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b76fc00 len:0x10000 key:0x88afdef0 00:28:16.617 [2024-06-09 09:06:39.088181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b75fb80 len:0x10000 key:0x88afdef0 00:28:16.617 [2024-06-09 09:06:39.088197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b74fb00 len:0x10000 key:0x88afdef0 00:28:16.617 [2024-06-09 09:06:39.088213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b73fa80 len:0x10000 key:0x88afdef0 00:28:16.617 [2024-06-09 09:06:39.088229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 
cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b72fa00 len:0x10000 key:0x88afdef0 00:28:16.617 [2024-06-09 09:06:39.088245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b71f980 len:0x10000 key:0x88afdef0 00:28:16.617 [2024-06-09 09:06:39.088262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b70f900 len:0x10000 key:0x88afdef0 00:28:16.617 [2024-06-09 09:06:39.088278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ff880 len:0x10000 key:0x88afdef0 00:28:16.617 [2024-06-09 09:06:39.088294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ef800 len:0x10000 key:0x88afdef0 00:28:16.617 [2024-06-09 09:06:39.088309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6df780 len:0x10000 key:0x88afdef0 00:28:16.617 [2024-06-09 09:06:39.088324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6cf700 len:0x10000 key:0x88afdef0 00:28:16.617 [2024-06-09 09:06:39.088343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6bf680 len:0x10000 key:0x88afdef0 00:28:16.617 [2024-06-09 09:06:39.088359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6af600 len:0x10000 key:0x88afdef0 00:28:16.617 [2024-06-09 09:06:39.088375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 
cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b69f580 len:0x10000 key:0x88afdef0 00:28:16.617 [2024-06-09 09:06:39.088391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b68f500 len:0x10000 key:0x88afdef0 00:28:16.617 [2024-06-09 09:06:39.088407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b67f480 len:0x10000 key:0x88afdef0 00:28:16.617 [2024-06-09 09:06:39.088426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b66f400 len:0x10000 key:0x88afdef0 00:28:16.617 [2024-06-09 09:06:39.088445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b65f380 len:0x10000 key:0x88afdef0 00:28:16.617 [2024-06-09 09:06:39.088461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b64f300 len:0x10000 key:0x88afdef0 00:28:16.617 [2024-06-09 09:06:39.088478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b63f280 len:0x10000 key:0x88afdef0 00:28:16.617 [2024-06-09 09:06:39.088495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b62f200 len:0x10000 key:0x88afdef0 00:28:16.617 [2024-06-09 09:06:39.088510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b61f180 len:0x10000 key:0x88afdef0 00:28:16.617 [2024-06-09 09:06:39.088527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 
cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b60f100 len:0x10000 key:0x88afdef0 00:28:16.617 [2024-06-09 09:06:39.088542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9f0000 len:0x10000 key:0x7c4c4c2a 00:28:16.617 [2024-06-09 09:06:39.088559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9dff80 len:0x10000 key:0x7c4c4c2a 00:28:16.617 [2024-06-09 09:06:39.088575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9cff00 len:0x10000 key:0x7c4c4c2a 00:28:16.617 [2024-06-09 09:06:39.088590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.617 [2024-06-09 09:06:39.088600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9bfe80 len:0x10000 key:0x7c4c4c2a 00:28:16.618 [2024-06-09 09:06:39.088606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.618 [2024-06-09 09:06:39.088616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9afe00 len:0x10000 key:0x7c4c4c2a 00:28:16.618 [2024-06-09 09:06:39.088623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.618 [2024-06-09 09:06:39.088633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b99fd80 len:0x10000 key:0x7c4c4c2a 00:28:16.618 [2024-06-09 09:06:39.088639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.618 [2024-06-09 09:06:39.088649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b98fd00 len:0x10000 key:0x7c4c4c2a 00:28:16.618 [2024-06-09 09:06:39.088655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.618 [2024-06-09 09:06:39.088665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b97fc80 len:0x10000 key:0x7c4c4c2a 00:28:16.618 [2024-06-09 09:06:39.088671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 
cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.618 [2024-06-09 09:06:39.088680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b96fc00 len:0x10000 key:0x7c4c4c2a 00:28:16.618 [2024-06-09 09:06:39.088687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.618 [2024-06-09 09:06:39.088696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b95fb80 len:0x10000 key:0x7c4c4c2a 00:28:16.618 [2024-06-09 09:06:39.088703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.618 [2024-06-09 09:06:39.088714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b94fb00 len:0x10000 key:0x7c4c4c2a 00:28:16.618 [2024-06-09 09:06:39.088720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.618 [2024-06-09 09:06:39.088737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b93fa80 len:0x10000 key:0x7c4c4c2a 00:28:16.618 [2024-06-09 09:06:39.088744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.618 [2024-06-09 09:06:39.088754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b92fa00 len:0x10000 key:0x7c4c4c2a 00:28:16.618 [2024-06-09 09:06:39.088760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.618 [2024-06-09 09:06:39.088770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b91f980 len:0x10000 key:0x7c4c4c2a 00:28:16.618 [2024-06-09 09:06:39.088776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.618 [2024-06-09 09:06:39.088787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b90f900 len:0x10000 key:0x7c4c4c2a 00:28:16.618 [2024-06-09 09:06:39.088793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.618 [2024-06-09 09:06:39.088803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ff880 len:0x10000 key:0x7c4c4c2a 00:28:16.618 [2024-06-09 09:06:39.088811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.618 [2024-06-09 09:06:39.088820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ef800 len:0x10000 key:0x7c4c4c2a 00:28:16.618 [2024-06-09 09:06:39.088827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 
cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.618 [2024-06-09 09:06:39.088836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8df780 len:0x10000 key:0x7c4c4c2a 00:28:16.618 [2024-06-09 09:06:39.088843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.618 [2024-06-09 09:06:39.088853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8cf700 len:0x10000 key:0x7c4c4c2a 00:28:16.618 [2024-06-09 09:06:39.088859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.618 [2024-06-09 09:06:39.088869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4dfd80 len:0x10000 key:0x336bad47 00:28:16.618 [2024-06-09 09:06:39.088876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:8192 cdw0:14f6050 sqhd:61c0 p:0 m:0 dnr:0 00:28:16.618 [2024-06-09 09:06:39.089792] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:28:16.618 [2024-06-09 09:06:39.091972] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:16.618 [2024-06-09 09:06:39.091988] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:16.618 [2024-06-09 09:06:39.091995] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001929b1c0 00:28:17.556 [2024-06-09 09:06:40.039619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:17.556 [2024-06-09 09:06:40.039643] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:17.556 [2024-06-09 09:06:40.039783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:17.556 [2024-06-09 09:06:40.039793] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:28:17.556 [2024-06-09 09:06:40.039904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:17.556 [2024-06-09 09:06:40.039913] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:28:17.556 [2024-06-09 09:06:40.040022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:17.557 [2024-06-09 09:06:40.040030] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:28:17.557 [2024-06-09 09:06:40.040138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:17.557 [2024-06-09 09:06:40.040146] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
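The long run of *NOTICE* pairs above is spdk_nvme_print_command / spdk_nvme_print_completion output: each in-flight WRITE is completed with ABORTED - SQ DELETION (00/08), that is Status Code Type 0x0 (generic) and Status Code 0x08, because the submission queue was deleted under it while the target shut down. As a reading aid, here is a minimal, hypothetical C sketch of how those fields pack into the 16-bit completion status word; the layout follows the NVMe base specification, and the struct is illustrative rather than SPDK's own definition (SPDK declares the equivalent in spdk/nvme_spec.h).

```c
#include <stdint.h>
#include <stdio.h>

/* NVMe CQE status word (completion DW3 bits 31:16), per the NVMe base
 * spec. Sketch only; not copied from SPDK. */
struct nvme_status {
    uint16_t p   : 1; /* phase tag */
    uint16_t sc  : 8; /* status code: 0x08 = aborted, SQ deleted */
    uint16_t sct : 3; /* status code type: 0x0 = generic */
    uint16_t crd : 2; /* command retry delay */
    uint16_t m   : 1; /* more status information available */
    uint16_t dnr : 1; /* do not retry */
};

int main(void)
{
    /* The status every completion above reports. */
    struct nvme_status s = { .sct = 0x0, .sc = 0x08 };

    /* Prints "(00/08) m:0 dnr:0", matching the log's notation. */
    printf("(%02x/%02x) m:%u dnr:%u\n", s.sct, s.sc, s.m, s.dnr);
    return 0;
}
```

Note dnr:0 on every completion: the failure is marked retryable, which is consistent with the initiator repeatedly trying to rebuild its queues below rather than failing the I/O outright.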
00:28:17.557 [2024-06-09 09:06:40.040255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:17.557 [2024-06-09 09:06:40.040263] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:28:17.557 [2024-06-09 09:06:40.040372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:17.557 [2024-06-09 09:06:40.040384] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:28:17.557 [2024-06-09 09:06:40.040403] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:17.557 [2024-06-09 09:06:40.040410] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:17.557 [2024-06-09 09:06:40.040417] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:17.557 [2024-06-09 09:06:40.040428] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:17.557 [2024-06-09 09:06:40.040433] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:28:17.557 [2024-06-09 09:06:40.040439] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] already in failed state 00:28:17.557 [2024-06-09 09:06:40.040447] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:28:17.557 [2024-06-09 09:06:40.040453] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:28:17.557 [2024-06-09 09:06:40.040459] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] already in failed state 00:28:17.557 [2024-06-09 09:06:40.040467] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:28:17.557 [2024-06-09 09:06:40.040472] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:28:17.557 [2024-06-09 09:06:40.040478] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] already in failed state 00:28:17.557 [2024-06-09 09:06:40.040486] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:28:17.557 [2024-06-09 09:06:40.040491] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:28:17.557 [2024-06-09 09:06:40.040497] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] already in failed state 00:28:17.557 [2024-06-09 09:06:40.040505] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:28:17.557 [2024-06-09 09:06:40.040511] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:28:17.557 [2024-06-09 09:06:40.040516] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] already in failed state 00:28:17.557 [2024-06-09 09:06:40.040524] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:28:17.557 [2024-06-09 09:06:40.040530] 
nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:28:17.557 [2024-06-09 09:06:40.040536] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] already in failed state 00:28:17.557 [2024-06-09 09:06:40.040553] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:17.557 [2024-06-09 09:06:40.040560] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:17.557 [2024-06-09 09:06:40.040565] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:17.557 [2024-06-09 09:06:40.040570] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:17.557 [2024-06-09 09:06:40.040576] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:17.557 [2024-06-09 09:06:40.040582] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:17.557 [2024-06-09 09:06:40.040587] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:17.557 [2024-06-09 09:06:40.042507] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:28:17.557 [2024-06-09 09:06:40.042518] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:28:17.557 [2024-06-09 09:06:40.042531] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:28:17.557 [2024-06-09 09:06:40.042539] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:28:17.557 [2024-06-09 09:06:40.042546] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:28:17.557 [2024-06-09 09:06:40.042553] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:28:17.557 [2024-06-09 09:06:40.042562] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:17.557 [2024-06-09 09:06:40.052708] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:17.557 [2024-06-09 09:06:40.052730] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:17.557 [2024-06-09 09:06:40.052737] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192a8500 00:28:17.557 [2024-06-09 09:06:40.052750] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:17.557 [2024-06-09 09:06:40.052757] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:17.557 [2024-06-09 09:06:40.052762] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c5040 00:28:17.557 [2024-06-09 09:06:40.052773] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:17.557 [2024-06-09 09:06:40.052779] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:17.557 [2024-06-09 09:06:40.052784] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: 
*ERROR*: Failed to connect rqpair=0x2000192c6340 00:28:17.557 [2024-06-09 09:06:40.052795] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:17.557 [2024-06-09 09:06:40.052801] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:17.557 [2024-06-09 09:06:40.052807] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d2900 00:28:17.557 [2024-06-09 09:06:40.052817] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:17.557 [2024-06-09 09:06:40.052824] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:17.557 [2024-06-09 09:06:40.052828] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d9c80 00:28:17.557 [2024-06-09 09:06:40.052839] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:17.557 [2024-06-09 09:06:40.052844] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:17.557 [2024-06-09 09:06:40.052849] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e5300 00:28:17.557 [2024-06-09 09:06:40.052860] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:17.557 [2024-06-09 09:06:40.052866] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:17.558 [2024-06-09 09:06:40.052870] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:28:17.558 [2024-06-09 09:06:40.094692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:17.558 [2024-06-09 09:06:40.094706] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:28:17.558 [2024-06-09 09:06:40.094740] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:17.558 [2024-06-09 09:06:40.094750] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:17.558 [2024-06-09 09:06:40.094756] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] already in failed state 00:28:17.558 [2024-06-09 09:06:40.094775] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
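Each reconnect attempt in this stretch dies identically: after rdma_connect(), the host waits on its CM event channel for RDMA_CM_EVENT_ESTABLISHED and instead receives RDMA_CM_EVENT_REJECTED (8), because the target side no longer accepts connections. The "RDMA connect error -74" that follows is consistent with the mismatch being surfaced as -EBADMSG (errno 74 on Linux). Below is a minimal sketch of that validation step using real librdmacm calls; it is a stand-in for, not a copy of, SPDK's nvme_rdma_validate_cm_event.

```c
#include <rdma/rdma_cma.h>
#include <stdio.h>

/* Fetch one CM event from the channel and require a specific type.
 * Returns 0 on the expected event, -1 on anything else. */
static int expect_cm_event(struct rdma_event_channel *ch,
                           enum rdma_cm_event_type expected)
{
    struct rdma_cm_event *ev;

    if (rdma_get_cm_event(ch, &ev)) {
        perror("rdma_get_cm_event");
        return -1;
    }
    if (ev->event != expected) {
        /* Mirrors the log: expected ESTABLISHED, got REJECTED (8). */
        fprintf(stderr, "Expected %s but received %s (status = %d)\n",
                rdma_event_str(expected), rdma_event_str(ev->event),
                ev->status);
        rdma_ack_cm_event(ev);
        return -1;
    }
    return rdma_ack_cm_event(ev); /* 0 on success */
}
```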
00:28:17.558 [2024-06-09 09:06:40.101706] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:28:17.558 [2024-06-09 09:06:40.103783] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:17.558 [2024-06-09 09:06:40.103798] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:17.558 [2024-06-09 09:06:40.103803] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001929b1c0 00:28:18.936 [2024-06-09 09:06:41.055464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:18.936 [2024-06-09 09:06:41.055485] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:28:18.936 [2024-06-09 09:06:41.055600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:18.936 [2024-06-09 09:06:41.055609] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:28:18.936 [2024-06-09 09:06:41.055718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:18.936 [2024-06-09 09:06:41.055733] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:28:18.936 [2024-06-09 09:06:41.055850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:18.936 [2024-06-09 09:06:41.055859] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:28:18.936 [2024-06-09 09:06:41.055966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:18.936 [2024-06-09 09:06:41.055975] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:28:18.936 [2024-06-09 09:06:41.056082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:18.936 [2024-06-09 09:06:41.056091] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:28:18.936 [2024-06-09 09:06:41.056198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:18.936 [2024-06-09 09:06:41.056206] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
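The "CQ transport error -6 (No such device or address)" records are the completion poller reporting -ENXIO once the RDMA queue pair underneath has failed; nvme_ctrlr_fail then marks the controller failed and bdev_nvme schedules the next reset. A hedged sketch of such a polling call site, using the public spdk_nvme_qpair_process_completions API named in the log (the real bdev_nvme dispatch is considerably more involved):

```c
#include <stdint.h>
#include <stdio.h>
#include <spdk/nvme.h>

/* Poll a qpair for completions; 0 as the second argument means
 * "process everything available". A negative return is a transport
 * failure, e.g. the -ENXIO (-6) seen throughout this log. */
static void poll_qpair(struct spdk_nvme_qpair *qpair)
{
    int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

    if (rc < 0) {
        fprintf(stderr, "CQ transport error %d on qpair\n", rc);
        /* Callers such as bdev_nvme would now fail the controller
         * and kick off the reset/reconnect cycle shown above. */
    }
}
```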
00:28:18.936 [2024-06-09 09:06:41.056225] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:28:18.936 [2024-06-09 09:06:41.056231] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:28:18.936 [2024-06-09 09:06:41.056238] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] already in failed state 00:28:18.936 [2024-06-09 09:06:41.056248] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:28:18.936 [2024-06-09 09:06:41.056254] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:28:18.936 [2024-06-09 09:06:41.056260] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] already in failed state 00:28:18.936 [2024-06-09 09:06:41.056268] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:28:18.936 [2024-06-09 09:06:41.056274] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:28:18.936 [2024-06-09 09:06:41.056283] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] already in failed state 00:28:18.937 [2024-06-09 09:06:41.056292] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:28:18.937 [2024-06-09 09:06:41.056297] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:28:18.937 [2024-06-09 09:06:41.056303] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] already in failed state 00:28:18.937 [2024-06-09 09:06:41.056310] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:28:18.937 [2024-06-09 09:06:41.056316] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:28:18.937 [2024-06-09 09:06:41.056322] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] already in failed state 00:28:18.937 [2024-06-09 09:06:41.056330] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:18.937 [2024-06-09 09:06:41.056335] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:28:18.937 [2024-06-09 09:06:41.056340] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] already in failed state 00:28:18.937 [2024-06-09 09:06:41.056348] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:18.937 [2024-06-09 09:06:41.056353] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:18.937 [2024-06-09 09:06:41.056359] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:18.937 [2024-06-09 09:06:41.056376] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:18.937 [2024-06-09 09:06:41.056383] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
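Every failed reset follows the same trio of log sites: nvme_ctrlr_disconnect ("resetting controller"), nvme_ctrlr_process_init finding the controller still in error state, and spdk_nvme_ctrlr_reconnect_poll_async giving up ("controller reinitialization failed", then "already in failed state"). The sketch below shows the call pattern a driver of that asynchronous API follows. The function names are taken from the log, but the exact signatures vary across SPDK releases, so treat this as an assumed, illustrative shape rather than this tree's actual reset path.

```c
#include <errno.h>
#include <spdk/nvme.h>

/* Sketch: disconnect, then drive the async reconnect to completion.
 * Signatures assumed from recent SPDK releases; may differ here. */
static int try_reconnect(struct spdk_nvme_ctrlr *ctrlr)
{
    int rc = spdk_nvme_ctrlr_disconnect(ctrlr); /* "resetting controller" */

    if (rc != 0) {
        return rc; /* e.g. a reset is already in progress */
    }
    spdk_nvme_ctrlr_reconnect_async(ctrlr);

    do {
        rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
        /* A real poller yields between calls instead of spinning. */
    } while (rc == -EAGAIN);

    /* Nonzero here is the log's "controller reinitialization failed". */
    return rc;
}
```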
00:28:18.937 [2024-06-09 09:06:41.056388] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:18.937 [2024-06-09 09:06:41.056393] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:18.937 [2024-06-09 09:06:41.056398] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:18.937 [2024-06-09 09:06:41.056405] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:18.937 [2024-06-09 09:06:41.056411] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:18.937 [2024-06-09 09:06:41.056611] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:18.937 [2024-06-09 09:06:41.056621] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:28:18.937 [2024-06-09 09:06:41.056629] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:28:18.937 [2024-06-09 09:06:41.056637] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:28:18.937 [2024-06-09 09:06:41.056645] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:28:18.937 [2024-06-09 09:06:41.056652] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:28:18.937 [2024-06-09 09:06:41.056659] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:28:18.937 [2024-06-09 09:06:41.066349] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:18.937 [2024-06-09 09:06:41.066367] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:18.937 [2024-06-09 09:06:41.066373] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:28:18.937 [2024-06-09 09:06:41.066385] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:18.937 [2024-06-09 09:06:41.066411] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:18.937 [2024-06-09 09:06:41.066416] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e5300 00:28:18.937 [2024-06-09 09:06:41.066427] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:18.937 [2024-06-09 09:06:41.066433] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:18.937 [2024-06-09 09:06:41.066438] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d9c80 00:28:18.937 [2024-06-09 09:06:41.066449] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:18.937 [2024-06-09 09:06:41.066455] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:18.937 [2024-06-09 09:06:41.066460] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d2900 
00:28:18.937 [2024-06-09 09:06:41.066470] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:18.937 [2024-06-09 09:06:41.066476] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:18.937 [2024-06-09 09:06:41.066481] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c6340 00:28:18.937 [2024-06-09 09:06:41.066491] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:18.937 [2024-06-09 09:06:41.066497] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:18.937 [2024-06-09 09:06:41.066502] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c5040 00:28:18.937 [2024-06-09 09:06:41.066512] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:18.937 [2024-06-09 09:06:41.066518] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:18.937 [2024-06-09 09:06:41.066523] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192a8500 00:28:18.937 [2024-06-09 09:06:41.106515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:18.937 [2024-06-09 09:06:41.106528] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:28:18.937 [2024-06-09 09:06:41.106556] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:18.937 [2024-06-09 09:06:41.106562] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:18.937 [2024-06-09 09:06:41.106569] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] already in failed state 00:28:18.937 [2024-06-09 09:06:41.106581] bdev_nvme.c:2890:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:18.937 [2024-06-09 09:06:41.106613] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:18.937 [2024-06-09 09:06:41.116621] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:28:18.937 [2024-06-09 09:06:41.118666] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:18.937 [2024-06-09 09:06:41.118681] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:18.937 [2024-06-09 09:06:41.118687] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001929b1c0 00:28:19.875 [2024-06-09 09:06:42.069173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:19.875 [2024-06-09 09:06:42.069211] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:19.875 [2024-06-09 09:06:42.069339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:19.875 [2024-06-09 09:06:42.069349] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:28:19.875 [2024-06-09 09:06:42.069465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:19.875 [2024-06-09 09:06:42.069473] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:28:19.875 [2024-06-09 09:06:42.069587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:19.875 [2024-06-09 09:06:42.069595] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:28:19.875 [2024-06-09 09:06:42.069704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:19.875 [2024-06-09 09:06:42.069712] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:28:19.875 [2024-06-09 09:06:42.069833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:19.875 [2024-06-09 09:06:42.069842] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:28:19.875 [2024-06-09 09:06:42.069951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:19.875 [2024-06-09 09:06:42.069959] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:28:19.875 [2024-06-09 09:06:42.069979] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:19.875 [2024-06-09 09:06:42.069986] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:19.875 [2024-06-09 09:06:42.069993] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:19.875 [2024-06-09 09:06:42.070007] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:19.875 [2024-06-09 09:06:42.070014] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:28:19.875 [2024-06-09 09:06:42.070019] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] already in failed state 00:28:19.875 [2024-06-09 09:06:42.070028] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:28:19.875 [2024-06-09 09:06:42.070033] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:28:19.875 [2024-06-09 09:06:42.070039] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] already in failed state 00:28:19.875 [2024-06-09 09:06:42.070047] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:28:19.875 [2024-06-09 09:06:42.070053] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:28:19.875 [2024-06-09 09:06:42.070058] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] already in failed state 00:28:19.875 [2024-06-09 09:06:42.070067] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:28:19.875 [2024-06-09 09:06:42.070073] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:28:19.875 [2024-06-09 09:06:42.070078] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] already in failed state 00:28:19.875 [2024-06-09 09:06:42.070087] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:28:19.875 [2024-06-09 09:06:42.070095] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:28:19.875 [2024-06-09 09:06:42.070101] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] already in failed state 00:28:19.875 [2024-06-09 09:06:42.070109] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:28:19.875 [2024-06-09 09:06:42.070115] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:28:19.875 [2024-06-09 09:06:42.070120] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] already in failed state 00:28:19.875 [2024-06-09 09:06:42.070139] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.875 [2024-06-09 09:06:42.070146] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:19.875 [2024-06-09 09:06:42.070152] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.875 [2024-06-09 09:06:42.070157] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.875 [2024-06-09 09:06:42.070163] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.875 [2024-06-09 09:06:42.070168] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.875 [2024-06-09 09:06:42.070174] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:19.875 [2024-06-09 09:06:42.071608] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:28:19.875 [2024-06-09 09:06:42.071627] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:28:19.875 [2024-06-09 09:06:42.071636] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:28:19.875 [2024-06-09 09:06:42.071646] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:28:19.875 [2024-06-09 09:06:42.071657] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:28:19.875 [2024-06-09 09:06:42.071673] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:28:19.875 [2024-06-09 09:06:42.071687] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:19.875 [2024-06-09 09:06:42.082670] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:19.875 [2024-06-09 09:06:42.082693] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:19.876 [2024-06-09 09:06:42.082699] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192a8500 00:28:19.876 [2024-06-09 09:06:42.082714] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:19.876 [2024-06-09 09:06:42.082721] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:19.876 [2024-06-09 09:06:42.082732] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c5040 00:28:19.876 [2024-06-09 09:06:42.082743] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:19.876 [2024-06-09 09:06:42.082749] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:19.876 [2024-06-09 09:06:42.082754] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c6340 00:28:19.876 [2024-06-09 09:06:42.082765] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:19.876 [2024-06-09 09:06:42.082771] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:19.876 [2024-06-09 09:06:42.082776] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d2900 
00:28:19.876 [2024-06-09 09:06:42.082789] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:19.876 [2024-06-09 09:06:42.082796] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:19.876 [2024-06-09 09:06:42.082801] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d9c80 00:28:19.876 [2024-06-09 09:06:42.082812] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:19.876 [2024-06-09 09:06:42.082819] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:19.876 [2024-06-09 09:06:42.082824] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e5300 00:28:19.876 [2024-06-09 09:06:42.082835] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:19.876 [2024-06-09 09:06:42.082842] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:19.876 [2024-06-09 09:06:42.082847] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:28:19.876 [2024-06-09 09:06:42.121460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:19.876 [2024-06-09 09:06:42.121477] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:28:19.876 [2024-06-09 09:06:42.121507] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:19.876 [2024-06-09 09:06:42.121514] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:19.876 [2024-06-09 09:06:42.121520] nvme_ctrlr.c:1031:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] already in failed state 00:28:19.876 [2024-06-09 09:06:42.121541] bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:19.876 [2024-06-09 09:06:42.130655] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:28:19.876 [2024-06-09 09:06:42.132704] nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:28:19.876 [2024-06-09 09:06:42.132720] nvme_rdma.c:1167:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:28:19.876 [2024-06-09 09:06:42.132731] nvme_rdma.c:2743:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001929b1c0
00:28:20.444 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh: line 973: 1462578 Aborted (core dumped) "${NVMF_APP[@]}" "$@"
00:28:20.444 09:06:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # trap - ERR
00:28:20.444 09:06:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # print_backtrace
00:28:20.444 09:06:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1152 -- # [[ ehxBET =~ e ]]
00:28:20.444 09:06:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1154 -- # args=('1462578' 'nvmf_shutdown_tc3' 'nvmf_shutdown_tc3' '--transport=rdma')
00:28:20.444 09:06:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1154 -- # local args
00:28:20.444 09:06:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1156 -- # xtrace_disable
00:28:20.444 09:06:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:28:20.444 ========== Backtrace start: ==========
00:28:20.444
00:28:20.444 in /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh:973 -> killprocess(["1462578"])
00:28:20.444 ...
00:28:20.444 968 kill $1
00:28:20.444 969 fi
00:28:20.444 970
00:28:20.444 971 # wait for the process regardless if its the dummy sudo one
00:28:20.444 972 # or the actual app - it should terminate anyway
00:28:20.444 => 973 wait $1
00:28:20.444 974 else
00:28:20.444 975 # the process is not there anymore
00:28:20.444 976 echo "Process with pid $1 is not found"
00:28:20.444 977 fi
00:28:20.444 978 }
00:28:20.444 ...
00:28:20.444 in /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/shutdown.sh:135 -> nvmf_shutdown_tc3([])
00:28:20.444 ...
00:28:20.444 130 trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:28:20.444 131
00:28:20.444 132 waitforio /var/tmp/bdevperf.sock Nvme1n1
00:28:20.444 133
00:28:20.444 134 # Kill the target half way through
00:28:20.444 => 135 killprocess $nvmfpid
00:28:20.444 136 nvmfpid=
00:28:20.444 137
00:28:20.444 138 # Verify bdevperf exits successfully
00:28:20.444 139 sleep 1
00:28:20.444 140 # TODO: Right now the NVMe-oF initiator will not correctly detect broken connections
00:28:20.444 ...
00:28:20.444 in /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh:1124 -> run_test(["nvmf_shutdown_tc3"],["nvmf_shutdown_tc3"])
00:28:20.444 ...
00:28:20.444 1119 timing_enter $test_name
00:28:20.444 1120 echo "************************************"
00:28:20.444 1121 echo "START TEST $test_name"
00:28:20.444 1122 echo "************************************"
00:28:20.444 1123 xtrace_restore
00:28:20.444 => 1124 time "$@"
00:28:20.444 1125 xtrace_disable
00:28:20.444 1126 echo "************************************"
00:28:20.444 1127 echo "END TEST $test_name"
00:28:20.444 1128 echo "************************************"
00:28:20.444 1129 timing_exit $test_name
00:28:20.444 ...
00:28:20.444 in /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/shutdown.sh:149 -> main(["--transport=rdma"])
00:28:20.444 ...
00:28:20.444 144 stoptarget
00:28:20.444 145 }
00:28:20.444 146
00:28:20.444 147 run_test "nvmf_shutdown_tc1" nvmf_shutdown_tc1
00:28:20.444 148 run_test "nvmf_shutdown_tc2" nvmf_shutdown_tc2
00:28:20.444 => 149 run_test "nvmf_shutdown_tc3" nvmf_shutdown_tc3
00:28:20.444 150
00:28:20.444 151 trap - SIGINT SIGTERM EXIT
00:28:20.444 ...
00:28:20.444
00:28:20.444 ========== Backtrace end ==========
00:28:20.444 09:06:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1193 -- # return 0
00:28:20.444
00:28:20.444 real 0m7.985s
00:28:20.444 user 0m13.032s
00:28:20.444 sys 0m0.892s
00:28:20.444 09:06:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1 -- # process_shm --id 0
00:28:20.444 09:06:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@807 -- # type=--id
00:28:20.444 09:06:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@808 -- # id=0
00:28:20.444 09:06:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@809 -- # '[' --id = --pid ']'
00:28:20.444 09:06:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@813 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:28:20.444 09:06:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@813 -- # shm_files=nvmf_trace.0
00:28:20.444 09:06:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@815 -- # [[ -z nvmf_trace.0 ]]
00:28:20.444 09:06:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@819 -- # for n in $shm_files
00:28:20.444 09:06:42 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@820 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
nvmf_trace.0
00:28:20.702 09:06:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@822 -- # return 0
00:28:20.702 09:06:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1 -- # kill -9 1462857
00:28:20.702 09:06:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1 -- # nvmftestfini
00:28:20.702 09:06:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:20.702 09:06:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync
00:28:20.702 09:06:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:28:20.702 09:06:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:28:20.702 09:06:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e
00:28:20.702 09:06:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20}
00:28:20.702 09:06:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:28:20.702 rmmod nvme_rdma
00:28:20.702 rmmod nvme_fabrics
00:28:20.702 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/common.sh: line 121: 1462857 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10
00:28:20.702 09:06:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:28:20.702 09:06:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e
00:28:20.702 09:06:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0
00:28:20.702 09:06:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n 1462578 ']'
00:28:20.702 09:06:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@490 -- # killprocess 1462578
00:28:20.702 09:06:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@949 -- # '[' -z 1462578 ']'
00:28:20.702 09:06:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # kill -0 1462578
00:28:20.702 /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh: line 953: kill: (1462578) - No such process
00:28:20.702 09:06:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # echo 'Process with pid 1462578 is not found'
00:28:20.702 Process with pid 1462578 is not found
00:28:20.702 09:06:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:28:20.702 09:06:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]]
00:28:20.702 09:06:43 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1 -- # exit 1
00:28:20.702 09:06:43 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1124 -- # trap - ERR
00:28:20.702 09:06:43 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1124 -- # print_backtrace
00:28:20.702 09:06:43 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1152 -- # [[ ehxBET =~ e ]]
00:28:20.702 09:06:43 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1154 -- # args=('--transport=rdma' '/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/shutdown.sh' 'nvmf_shutdown' '--transport=rdma')
00:28:20.702 09:06:43 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1154 -- # local args
00:28:20.702 09:06:43 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1156 -- # xtrace_disable
00:28:20.702 09:06:43 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:28:20.702 ========== Backtrace start: ==========
00:28:20.702
00:28:20.702 in /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh:1124 -> run_test(["nvmf_shutdown"],["/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/target/shutdown.sh"],["--transport=rdma"])
00:28:20.702 ...
00:28:20.702 1119 timing_enter $test_name
00:28:20.702 1120 echo "************************************"
00:28:20.702 1121 echo "START TEST $test_name"
00:28:20.702 1122 echo "************************************"
00:28:20.702 1123 xtrace_restore
00:28:20.702 => 1124 time "$@"
00:28:20.702 1125 xtrace_disable
00:28:20.702 1126 echo "************************************"
00:28:20.702 1127 echo "END TEST $test_name"
00:28:20.702 1128 echo "************************************"
00:28:20.702 1129 timing_exit $test_name
00:28:20.702 ...
00:28:20.702 in /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/nvmf.sh:82 -> main(["--transport=rdma"]) 00:28:20.702 ... 00:28:20.702 77 fi 00:28:20.702 78 elif [[ $SPDK_TEST_NVMF_TRANSPORT == "rdma" ]]; then 00:28:20.702 79 run_test "nvmf_device_removal" test/nvmf/target/device_removal.sh "${TEST_ARGS[@]}" 00:28:20.702 80 run_test "nvmf_srq_overwhelm" "$rootdir/test/nvmf/target/srq_overwhelm.sh" "${TEST_ARGS[@]}" 00:28:20.702 81 fi 00:28:20.702 => 82 run_test "nvmf_shutdown" $rootdir/test/nvmf/target/shutdown.sh "${TEST_ARGS[@]}" 00:28:20.702 83 fi 00:28:20.702 84 00:28:20.702 85 timing_exit target 00:28:20.702 86 00:28:20.702 87 timing_enter host 00:28:20.702 ... 00:28:20.702 00:28:20.702 ========== Backtrace end ========== 00:28:20.702 09:06:43 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1193 -- # return 0 00:28:20.702 00:28:20.702 real 0m25.778s 00:28:20.702 user 1m10.562s 00:28:20.702 sys 0m8.045s 00:28:20.702 09:06:43 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1 -- # exit 1 00:28:20.702 09:06:43 nvmf_rdma -- common/autotest_common.sh@1124 -- # trap - ERR 00:28:20.702 09:06:43 nvmf_rdma -- common/autotest_common.sh@1124 -- # print_backtrace 00:28:20.702 09:06:43 nvmf_rdma -- common/autotest_common.sh@1152 -- # [[ ehxBET =~ e ]] 00:28:20.702 09:06:43 nvmf_rdma -- common/autotest_common.sh@1154 -- # args=('--transport=rdma' '/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/nvmf.sh' 'nvmf_rdma' '/var/jenkins/workspace/nvmf-cvl-phy-autotest/autorun-spdk.conf') 00:28:20.702 09:06:43 nvmf_rdma -- common/autotest_common.sh@1154 -- # local args 00:28:20.702 09:06:43 nvmf_rdma -- common/autotest_common.sh@1156 -- # xtrace_disable 00:28:20.702 09:06:43 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:20.702 ========== Backtrace start: ========== 00:28:20.702 00:28:20.702 in /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/common/autotest_common.sh:1124 -> run_test(["nvmf_rdma"],["/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/test/nvmf/nvmf.sh"],["--transport=rdma"]) 00:28:20.702 ... 00:28:20.702 1119 timing_enter $test_name 00:28:20.702 1120 echo "************************************" 00:28:20.702 1121 echo "START TEST $test_name" 00:28:20.702 1122 echo "************************************" 00:28:20.702 1123 xtrace_restore 00:28:20.702 1124 time "$@" 00:28:20.702 1125 xtrace_disable 00:28:20.702 1126 echo "************************************" 00:28:20.702 1127 echo "END TEST $test_name" 00:28:20.702 1128 echo "************************************" 00:28:20.702 1129 timing_exit $test_name 00:28:20.702 ... 00:28:20.702 in /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/autotest.sh:284 -> main(["/var/jenkins/workspace/nvmf-cvl-phy-autotest/autorun-spdk.conf"]) 00:28:20.702 ... 00:28:20.702 279 if [ $SPDK_TEST_NVMF -eq 1 ]; then 00:28:20.702 280 export NET_TYPE 00:28:20.702 281 # The NVMe-oF run test cases are split out like this so that the parser that compiles the 00:28:20.702 282 # list of all tests can properly differentiate them. Please do not merge them into one line. 
00:28:20.702 283 if [ "$SPDK_TEST_NVMF_TRANSPORT" = "rdma" ]; then 00:28:20.702 => 284 run_test "nvmf_rdma" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:28:20.702 285 run_test "spdkcli_nvmf_rdma" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:28:20.702 286 elif [ "$SPDK_TEST_NVMF_TRANSPORT" = "tcp" ]; then 00:28:20.702 287 run_test "nvmf_tcp" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:28:20.702 288 if [[ $SPDK_TEST_URING -eq 0 ]]; then 00:28:20.703 289 run_test "spdkcli_nvmf_tcp" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:28:20.703 ... 00:28:20.703 00:28:20.703 ========== Backtrace end ========== 00:28:20.703 09:06:43 nvmf_rdma -- common/autotest_common.sh@1193 -- # return 0 00:28:20.703 00:28:20.703 real 22m5.091s 00:28:20.703 user 65m11.861s 00:28:20.703 sys 3m54.648s 00:28:20.703 09:06:43 nvmf_rdma -- common/autotest_common.sh@1 -- # autotest_cleanup 00:28:20.703 09:06:43 nvmf_rdma -- common/autotest_common.sh@1391 -- # local autotest_es=1 00:28:20.703 09:06:43 nvmf_rdma -- common/autotest_common.sh@1392 -- # xtrace_disable 00:28:20.703 09:06:43 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:30.715 ##### CORE BT nvmf_tgt_1462578.core.bt.txt ##### 00:28:30.715 00:28:30.715 gdb: warning: Couldn't determine a path for the index cache directory. 00:28:30.715 [New LWP 1462582] 00:28:30.715 [New LWP 1462578] 00:28:30.715 [New LWP 1462583] 00:28:30.715 [New LWP 1462581] 00:28:30.715 [New LWP 1462580] 00:28:30.715 [New LWP 1462584] 00:28:30.715 [Thread debugging using libthread_db enabled] 00:28:30.715 Using host libthread_db library "/usr/lib64/libthread_db.so.1". 00:28:30.715 Core was generated by `/var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0x'. 00:28:30.715 Program terminated with signal SIGABRT, Aborted. 00:28:30.715 #0 0x00007f7e125c3884 in __pthread_kill_implementation () from /usr/lib64/libc.so.6 00:28:30.715 [Current thread is 1 (Thread 0x7f7e100006c0 (LWP 1462582))] 00:28:30.715 00:28:30.715 Thread 6 (Thread 0x7f7e0ec006c0 (LWP 1462584)): 00:28:30.715 #0 0x00007f7e126361ad in write () from /usr/lib64/libc.so.6 00:28:30.715 No symbol table info available. 00:28:30.715 #1 0x00007f7e11e442e3 in rdma_disconnect (id=0xf85c00) at /usr/src/debug/rdma-core-46.0-1.fc38.x86_64/librdmacm/cma.c:2044 00:28:30.715 cmd = {cmd = 10, in = 4, out = 0, id = 13} 00:28:30.715 id_priv = 0xf85c00 00:28:30.715 ret = 00:28:30.715 #2 rdma_disconnect (id=0xf85c00) at /usr/src/debug/rdma-core-46.0-1.fc38.x86_64/librdmacm/cma.c:2030 00:28:30.715 cmd = 00:28:30.715 id_priv = 00:28:30.715 ret = 00:28:30.715 #3 0x00007f7e134c51d6 in spdk_rdma_qp_disconnect (spdk_rdma_qp=0x7f7df800fec0) at rdma_verbs.c:105 00:28:30.715 rc = 0 00:28:30.715 __PRETTY_FUNCTION__ = "spdk_rdma_qp_disconnect" 00:28:30.715 __func__ = "spdk_rdma_qp_disconnect" 00:28:30.715 #4 0x00007f7e13ae9c1e in nvmf_rdma_close_qpair (qpair=0xf85e60, cb_fn=0x7f7e139f5d06 <_nvmf_transport_qpair_fini_complete>, cb_arg=0x7f7df8052970) at rdma.c:4411 00:28:30.715 rqpair = 0xf85e60 00:28:30.715 #5 0x00007f7e13a327d8 in nvmf_transport_qpair_fini (qpair=0xf85e60, cb_fn=0x7f7e139f5d06 <_nvmf_transport_qpair_fini_complete>, cb_arg=0x7f7df8052970) at transport.c:750 00:28:30.715 No locals. 
00:28:30.715 #6 0x00007f7e139f7c1c in _nvmf_qpair_destroy (ctx=0x7f7df8052970, status=0) at nvmf.c:1364 00:28:30.715 qpair_ctx = 0x7f7df8052970 00:28:30.715 qpair = 0xf85e60 00:28:30.715 ctrlr = 0x7f7df825d500 00:28:30.715 sgroup = 0x7f7df8000fe8 00:28:30.715 sid = 0 00:28:30.715 __PRETTY_FUNCTION__ = "_nvmf_qpair_destroy" 00:28:30.715 #7 0x00007f7e13979dd6 in nvmf_qpair_request_cleanup (qpair=0xf85e60) at ctrlr.c:4441 00:28:30.715 __PRETTY_FUNCTION__ = "nvmf_qpair_request_cleanup" 00:28:30.715 #8 0x00007f7e1397a1fa in spdk_nvmf_request_free (req=0x200019811008) at ctrlr.c:4456 00:28:30.715 qpair = 0xf85e60 00:28:30.715 __func__ = "spdk_nvmf_request_free" 00:28:30.715 #9 0x00007f7e13974c90 in nvmf_qpair_free_aer (qpair=0xf85e60) at ctrlr.c:3987 00:28:30.715 ctrlr = 0x7f7df825d500 00:28:30.715 i = 3 00:28:30.715 __PRETTY_FUNCTION__ = "nvmf_qpair_free_aer" 00:28:30.715 #10 0x00007f7e139f825f in spdk_nvmf_qpair_disconnect (qpair=0xf85e60) at nvmf.c:1424 00:28:30.715 group = 0x7f7df8000c00 00:28:30.715 qpair_ctx = 0x7f7df8052970 00:28:30.715 __PRETTY_FUNCTION__ = "spdk_nvmf_qpair_disconnect" 00:28:30.715 __func__ = "spdk_nvmf_qpair_disconnect" 00:28:30.715 #11 0x00007f7e139fafab in nvmf_poll_group_remove_subsystem_msg (ctx=0x7f7df8052940) at nvmf.c:1686 00:28:30.715 qpair = 0xf85e60 00:28:30.715 qpair_tmp = 0x1205300 00:28:30.715 subsystem = 0x1269ab0 00:28:30.715 group = 0x7f7df8000c00 00:28:30.715 qpair_ctx = 0x7f7df8052940 00:28:30.715 qpairs_found = true 00:28:30.715 rc = 0 00:28:30.715 #12 0x00007f7e139fb5e6 in nvmf_poll_group_remove_subsystem (group=0x7f7df8000c00, subsystem=0x1269ab0, cb_fn=0x7f7e139bfc8d , cb_arg=0x10c7ec0) at nvmf.c:1735 00:28:30.715 sgroup = 0x7f7df8000fe8 00:28:30.715 ctx = 0x7f7df8052940 00:28:30.715 i = 32 00:28:30.715 __func__ = "nvmf_poll_group_remove_subsystem" 00:28:30.715 #13 0x00007f7e139bfe11 in subsystem_state_change_on_pg (i=0x10c7ec0) at subsystem.c:694 00:28:30.715 ctx = 0x126e620 00:28:30.715 ch = 0x7f7df8000ba0 00:28:30.715 group = 0x7f7df8000c00 00:28:30.715 __PRETTY_FUNCTION__ = "subsystem_state_change_on_pg" 00:28:30.715 #14 0x00007f7e130cbc43 in _call_channel (ctx=0x10c7ec0) at thread.c:2562 00:28:30.715 i = 0x10c7ec0 00:28:30.715 ch = 0x7f7df8000ba0 00:28:30.715 #15 0x00007f7e130bb615 in msg_queue_run_batch (thread=0xf73150, max_msgs=8) at thread.c:854 00:28:30.715 msg = 0x2000040bf2c0 00:28:30.715 count = 1 00:28:30.715 i = 0 00:28:30.715 messages = {0x2000040bf2c0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0} 00:28:30.715 notify = 1 00:28:30.715 rc = 0 00:28:30.715 __func__ = "msg_queue_run_batch" 00:28:30.715 __PRETTY_FUNCTION__ = "msg_queue_run_batch" 00:28:30.715 #16 0x00007f7e130bea88 in thread_poll (thread=0xf73150, max_msgs=0, now=3624548066536832) at thread.c:1076 00:28:30.715 msg_count = 0 00:28:30.715 poller = 0xce081dbf9ff2e 00:28:30.715 tmp = 0xf73150 00:28:30.715 critical_msg = 0x0 00:28:30.715 rc = 0 00:28:30.715 #17 0x00007f7e130bf963 in spdk_thread_poll (thread=0xf73150, max_msgs=0, now=3624548066536832) at thread.c:1173 00:28:30.715 orig_thread = 0x0 00:28:30.715 rc = 0 00:28:30.715 #18 0x00007f7e134853d5 in _reactor_run (reactor=0xf67d40) at reactor.c:906 00:28:30.715 thread = 0xf73150 00:28:30.715 lw_thread = 0xf73498 00:28:30.715 tmp = 0x0 00:28:30.715 now = 3624548066536832 00:28:30.715 rc = 0 00:28:30.715 #19 0x00007f7e13485a25 in reactor_run (arg=0xf67d40) at reactor.c:944 00:28:30.715 reactor = 0xf67d40 00:28:30.715 thread = 0x7f7e126361c1 00:28:30.715 lw_thread = 0x0 00:28:30.715 tmp = 0x0 00:28:30.715 thread_name = 
"reactor_4\000\000\000\000\000\000\000\203\351\277\016~\177\000\000\001\000\000\000\000\000\000" 00:28:30.715 last_sched = 0 00:28:30.715 __func__ = "reactor_run" 00:28:30.715 #20 0x00007f7e12de7925 in eal_thread_loop (arg=0x4) at ../lib/eal/common/eal_common_thread.c:212 00:28:30.715 f = 0x7f7e134857b4 00:28:30.715 fct_arg = 0xf67d40 00:28:30.715 lcore_id = 4 00:28:30.715 cpuset = "4", '\000' 00:28:30.715 ret = 0 00:28:30.715 #21 0x00007f7e12e02c5e in eal_worker_thread_loop (arg=0x4) at ../lib/eal/linux/eal.c:916 00:28:30.715 No locals. 00:28:30.715 #22 0x00007f7e125c1947 in start_thread () from /usr/lib64/libc.so.6 00:28:30.715 No symbol table info available. 00:28:30.715 #23 0x00007f7e12647970 in clone3 () from /usr/lib64/libc.so.6 00:28:30.715 No symbol table info available. 00:28:30.715 00:28:30.715 Thread 5 (Thread 0x7f7e114006c0 (LWP 1462580)): 00:28:30.715 #0 0x00007f7e12647d72 in epoll_wait () from /usr/lib64/libc.so.6 00:28:30.715 No symbol table info available. 00:28:30.715 #1 0x00007f7e12e0985c in eal_intr_handle_interrupts (pfd=7, totalfds=1) at ../lib/eal/linux/eal_interrupts.c:1077 00:28:30.715 events = {{events = 0, data = {ptr = 0x0, fd = 0, u32 = 0, u64 = 0}}} 00:28:30.715 nfds = 0 00:28:30.715 #2 0x00007f7e12e09a99 in eal_intr_thread_main (arg=0x0) at ../lib/eal/linux/eal_interrupts.c:1163 00:28:30.715 pipe_event = {events = 3, data = {ptr = 0x5, fd = 5, u32 = 5, u64 = 5}} 00:28:30.715 src = 0x0 00:28:30.715 numfds = 1 00:28:30.715 pfd = 7 00:28:30.715 __func__ = "eal_intr_thread_main" 00:28:30.715 #3 0x00007f7e12de7cd7 in control_thread_start (arg=0xefd400) at ../lib/eal/common/eal_common_thread.c:282 00:28:30.715 params = 0xefd400 00:28:30.715 start_arg = 0x0 00:28:30.715 start_routine = 0x7f7e12e098cb 00:28:30.715 #4 0x00007f7e12e00a2a in thread_start_wrapper (arg=0x7fffd31fd0b0) at ../lib/eal/unix/rte_thread.c:114 00:28:30.715 ctx = 0x7fffd31fd0b0 00:28:30.715 thread_func = 0x7f7e12de7c88 00:28:30.715 thread_args = 0xefd400 00:28:30.715 ret = 0 00:28:30.715 #5 0x00007f7e125c1947 in start_thread () from /usr/lib64/libc.so.6 00:28:30.715 No symbol table info available. 00:28:30.715 #6 0x00007f7e12647970 in clone3 () from /usr/lib64/libc.so.6 00:28:30.715 No symbol table info available. 00:28:30.715 00:28:30.715 Thread 4 (Thread 0x7f7e10a006c0 (LWP 1462581)): 00:28:30.715 #0 0x00007f7e1264988b in recvmsg () from /usr/lib64/libc.so.6 00:28:30.715 No symbol table info available. 
00:28:30.715 #1 0x00007f7e12df6db9 in read_msg (fd=10, m=0x7f7e109fe9f0, s=0x7f7e109fe980) at ../lib/eal/common/eal_common_proc.c:284 00:28:30.715 msglen = 0 00:28:30.715 iov = {iov_base = 0x7f7e109fe9f0, iov_len = 332} 00:28:30.715 msgh = {msg_name = 0x7f7e109fe980, msg_namelen = 110, msg_iov = 0x7f7e109fe940, msg_iovlen = 1, msg_control = 0x7f7e109fe8d0, msg_controllen = 48, msg_flags = 0} 00:28:30.715 control = '\000' 00:28:30.715 cmsg = 0x0 00:28:30.715 buflen = 332 00:28:30.715 #2 0x00007f7e12df7306 in mp_handle (arg=0x0) at ../lib/eal/common/eal_common_proc.c:410 00:28:30.715 ret = 316925504 00:28:30.715 msg = {type = 0, msg = {name = '\000' , len_param = 0, num_fds = 0, param = '\000' , "\001\000\000\000\320\352\237\020~\177\000\000=p\336\022~\177\000\000\000\000\000\000\000\000\000\000\210\032\344\022~\177", '\000' , "\001\000\000\000\030\000\000\000\000\000\000\000\377\377\377\377\030\000\000\000\020\353\237\020~\177\000\000\300p\336\022~\177\000\000\020\353\237\020~\177\000\000\262\241\337\022~\177\000\000\020\353\237\020~\177\000\000ho\336\022\377\377\377\377@p\343\022"..., fds = {32638, 0, 0, 15717376, 0, 0, 0, 15717376}}} 00:28:30.715 sa = {sun_family = 0, sun_path = '\000' } 00:28:30.715 fd = 10 00:28:30.715 #3 0x00007f7e12de7cd7 in control_thread_start (arg=0xefd400) at ../lib/eal/common/eal_common_thread.c:282 00:28:30.715 params = 0xefd400 00:28:30.715 start_arg = 0x0 00:28:30.715 start_routine = 0x7f7e12df72d7 00:28:30.715 #4 0x00007f7e12e00a2a in thread_start_wrapper (arg=0x7fffd31fc0c0) at ../lib/eal/unix/rte_thread.c:114 00:28:30.715 ctx = 0x7fffd31fc0c0 00:28:30.715 thread_func = 0x7f7e12de7c88 00:28:30.715 thread_args = 0xefd400 00:28:30.715 ret = 0 00:28:30.715 #5 0x00007f7e125c1947 in start_thread () from /usr/lib64/libc.so.6 00:28:30.715 No symbol table info available. 00:28:30.715 #6 0x00007f7e12647970 in clone3 () from /usr/lib64/libc.so.6 00:28:30.715 No symbol table info available. 
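Threads 5 and 4 above are DPDK EAL housekeeping threads: the interrupt thread parked in epoll_wait and the multiprocess channel reader parked in recvmsg. Neither is implicated in the abort; the threads worth reading closely in a report like this are the reactors polling the RDMA transport and the thread that actually raised SIGABRT. A minimal sketch for skimming a dump of this shape, assuming the raw nvmf_tgt_1462578.core.bt.txt file (without the Jenkins timestamp prefix shown in this log) and the "Thread N (...)" / "#k ..." layout gdb produced here:

  # Print each thread header plus its two innermost frames, dropping the
  # locals, so housekeeping threads and reactors stand apart at a glance.
  awk '/^Thread [0-9]+ /{print ""; print; n=0}
       /^#[0-9]+ /{if (n++ < 2) print "  " $0}' nvmf_tgt_1462578.core.bt.txt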
00:28:30.715 00:28:30.715 Thread 3 (Thread 0x7f7e0f6006c0 (LWP 1462583)): 00:28:30.715 #0 0x00007f7e13adecdc in nvmf_rdma_qpair_process_pending (rtransport=0xf8adc0, rqpair=0x10c8b10, drain=false) at rdma.c:3368 00:28:30.715 req = 0x0 00:28:30.715 tmp = 0x7f7e0f6006c0 00:28:30.715 rdma_req = 0x0 00:28:30.715 req_tmp = 0x30 00:28:30.715 resources = 0x7f7e00262d00 00:28:30.715 #1 0x00007f7e13aef35b in nvmf_rdma_poller_poll (rtransport=0xf8adc0, rpoller=0x7f7e0000eee0) at rdma.c:4771 00:28:30.716 wc = {{wr_id = 35184376690906, status = IBV_WC_SUCCESS, opcode = IBV_WC_SEND, vendor_err = 17599248, byte_len = 16, {imm_data = 222280912, invalidated_rkey = 222280912}, qp_num = 415, src_qp = 415, wc_flags = 0, pkey_index = 57536, slid = 3935, sl = 126 '~', dlid_path_bits = 127 '\177'}, {wr_id = 35184376641658, status = IBV_WC_SUCCESS, opcode = IBV_WC_SEND, vendor_err = 220014940, byte_len = 16, {imm_data = 99, invalidated_rkey = 99}, qp_num = 415, src_qp = 415, wc_flags = 0, pkey_index = 0, slid = 1, sl = 126 '~', dlid_path_bits = 127 '\177'}, {wr_id = 35184492674560, status = 257942768, opcode = 32638, vendor_err = 0, byte_len = 0, {imm_data = 8, invalidated_rkey = 8}, qp_num = 0, src_qp = 257943440, wc_flags = 32638, pkey_index = 65024, slid = 2895, sl = 0 '\000', dlid_path_bits = 32 ' '}, {wr_id = 35184492674560, status = 257942768, opcode = 32638, vendor_err = 8, byte_len = 8, {imm_data = 8, invalidated_rkey = 8}, qp_num = 8192, src_qp = 257942768, wc_flags = 32638, pkey_index = 0, slid = 0, sl = 0 '\000', dlid_path_bits = 0 '\000'}, {wr_id = 35184492674560, status = 257941752, opcode = 32638, vendor_err = 257941760, byte_len = 32638, {imm_data = 257941768, invalidated_rkey = 257941768}, qp_num = 32638, src_qp = 120585728, wc_flags = 8192, pkey_index = 58608, slid = 3935, sl = 126 '~', dlid_path_bits = 127 '\177'}, {wr_id = 35184492674560, status = 257942768, opcode = 32638, vendor_err = 120586112, byte_len = 8192, {imm_data = 257942768, invalidated_rkey = 257942768}, qp_num = 32638, src_qp = 257941824, wc_flags = 32638, pkey_index = 57672, slid = 3935, sl = 126 '~', dlid_path_bits = 127 '\177'}, {wr_id = 35184561880576, status = 257942816, opcode = 32638, vendor_err = 189791744, byte_len = 8192, {imm_data = 8, invalidated_rkey = 8}, qp_num = 0, src_qp = 257943632, wc_flags = 32638, pkey_index = 8, slid = 0, sl = 8 '\b', dlid_path_bits = 0 '\000'}, {wr_id = 8, status = IBV_WC_LOC_LEN_ERR, opcode = IBV_WC_RDMA_WRITE, vendor_err = 1, byte_len = 0, {imm_data = 1, invalidated_rkey = 1}, qp_num = 8, src_qp = 1, wc_flags = 113, pkey_index = 8, slid = 0, sl = 1 '\001', dlid_path_bits = 0 '\000'}, {wr_id = 4294967409, status = 131072, opcode = 114, vendor_err = 1, byte_len = 8192, {imm_data = 189791876, invalidated_rkey = 189791876}, qp_num = 8192, src_qp = 415, wc_flags = 0, pkey_index = 65024, slid = 1839, sl = 0 '\000', dlid_path_bits = 32 ' '}, {wr_id = 35184376652602, status = IBV_WC_SUCCESS, opcode = IBV_WC_SEND, vendor_err = 16346080, byte_len = 16, {imm_data = 61040, invalidated_rkey = 61040}, qp_num = 113, src_qp = 114, wc_flags = 1, pkey_index = 0, slid = 0, sl = 126 '~', dlid_path_bits = 127 '\177'}, {wr_id = 35184376651234, status = IBV_WC_SUCCESS, opcode = IBV_WC_SEND, vendor_err = 120585728, byte_len = 16, {imm_data = 257943008, invalidated_rkey = 257943008}, qp_num = 415, src_qp = 415, wc_flags = 0, pkey_index = 8, slid = 0, sl = 8 '\b', dlid_path_bits = 0 '\000'}, {wr_id = 8, status = IBV_WC_LOC_LEN_ERR, opcode = IBV_WC_RDMA_WRITE, vendor_err = 1, byte_len = 0, {imm_data = 1, 
invalidated_rkey = 1}, qp_num = 8, src_qp = 415, wc_flags = 0, pkey_index = 39470, slid = 5038, sl = 126 '~', dlid_path_bits = 127 '\177'}, {wr_id = 35184376622648, status = 4612240, opcode = 8192, vendor_err = 183, byte_len = 64, {imm_data = 16297408, invalidated_rkey = 16297408}, qp_num = 0, src_qp = 4612240, wc_flags = 8192, pkey_index = 35600, slid = 268, sl = 0 '\000', dlid_path_bits = 0 '\000'}, {wr_id = 140179400549216, status = 15567520, opcode = IBV_WC_SEND, vendor_err = 257942368, byte_len = 32638, {imm_data = 329459317, invalidated_rkey = 329459317}, qp_num = 32638, src_qp = 415, wc_flags = 3, pkey_index = 35600, slid = 268, sl = 0 '\000', dlid_path_bits = 0 '\000'}, {wr_id = 22932, status = 2368224, opcode = 32638, vendor_err = 257942464, byte_len = 32638, {imm_data = 328714112, invalidated_rkey = 328714112}, qp_num = 32638, src_qp = 257942752, wc_flags = 32638, pkey_index = 24720, slid = 70, sl = 0 '\000', dlid_path_bits = 32 ' '}, {wr_id = 4342022272, status = 4128, opcode = 32638, vendor_err = 257942432, byte_len = 32638, {imm_data = 114, invalidated_rkey = 114}, qp_num = 32638, src_qp = 114, wc_flags = 0, pkey_index = 0, slid = 0, sl = 126 '~', dlid_path_bits = 127 '\177'}, {wr_id = 35184561880576, status = 257943440, opcode = 32638, vendor_err = 0, byte_len = 0, {imm_data = 189791744, invalidated_rkey = 189791744}, qp_num = 8192, src_qp = 257943440, wc_flags = 32638, pkey_index = 0, slid = 0, sl = 0 '\000', dlid_path_bits = 0 '\000'}, {wr_id = 2753402699776, status = 3690592860, opcode = 843905, vendor_err = 257942576, byte_len = 1, {imm_data = 18, invalidated_rkey = 18}, qp_num = 1, src_qp = 4, wc_flags = 4, pkey_index = 3, slid = 0, sl = 4 '\004', dlid_path_bits = 0 '\000'}, {wr_id = 1, status = 222280912, opcode = 32638, vendor_err = 223024136, byte_len = 32638, {imm_data = 257942544, invalidated_rkey = 257942544}, qp_num = 32638, src_qp = 4160752544, wc_flags = 32637, pkey_index = 58528, slid = 3935, sl = 126 '~', dlid_path_bits = 127 '\177'}, {wr_id = 140179362693228, status = 16199104, opcode = IBV_WC_SEND, vendor_err = 257942656, byte_len = 32638, {imm_data = 1, invalidated_rkey = 1}, qp_num = 32638, src_qp = 257942688, wc_flags = 32638, pkey_index = 0, slid = 0, sl = 0 '\000', dlid_path_bits = 32 ' '}, {wr_id = 35184381522432, status = 257943632, opcode = 32638, vendor_err = 0, byte_len = 0, {imm_data = 9433600, invalidated_rkey = 9433600}, qp_num = 8192, src_qp = 257943632, wc_flags = 32638, pkey_index = 0, slid = 0, sl = 0 '\000', dlid_path_bits = 0 '\000'}, {wr_id = 140179400549904, status = 319015140, opcode = 32638, vendor_err = 189791744, byte_len = 8192, {imm_data = 257943440, invalidated_rkey = 257943440}, qp_num = 32638, src_qp = 0, wc_flags = 0, pkey_index = 65024, slid = 2895, sl = 0 '\000', dlid_path_bits = 32 ' '}, {wr_id = 140179400550288, status = IBV_WC_SUCCESS, opcode = IBV_WC_SEND, vendor_err = 189791744, byte_len = 8192, {imm_data = 257942424, invalidated_rkey = 257942424}, qp_num = 32638, src_qp = 257942432, wc_flags = 32638, pkey_index = 58280, slid = 3935, sl = 126 '~', dlid_path_bits = 127 '\177'}, {wr_id = 140179365631208, status = 257942768, opcode = 32638, vendor_err = 220086350, byte_len = 32638, {imm_data = 223024360, invalidated_rkey = 223024360}, qp_num = 32638, src_qp = 220086380, wc_flags = 32638, pkey_index = 48, slid = 0, sl = 48 '0', dlid_path_bits = 0 '\000'}, {wr_id = 140179400549928, status = 257942848, opcode = 32638, vendor_err = 0, byte_len = 0, {imm_data = 1, invalidated_rkey = 1}, qp_num = 0, src_qp = 257884422, wc_flags = 
195, pkey_index = 60348, slid = 56314, sl = 129 '\201', dlid_path_bits = 224 '\340'}, {wr_id = 4552910224, status = IBV_WC_TM_ERR, opcode = IBV_WC_RDMA_WRITE, vendor_err = 9433600, byte_len = 8192, {imm_data = 257943632, invalidated_rkey = 257943632}, qp_num = 32638, src_qp = 0, wc_flags = 0, pkey_index = 61952, slid = 143, sl = 0 '\000', dlid_path_bits = 32 ' '}, {wr_id = 140179400550480, status = IBV_WC_SUCCESS, opcode = IBV_WC_SEND, vendor_err = 9433600, byte_len = 8192, {imm_data = 257942616, invalidated_rkey = 257942616}, qp_num = 32638, src_qp = 257942624, wc_flags = 32638, pkey_index = 58472, slid = 3935, sl = 126 '~', dlid_path_bits = 127 '\177'}, {wr_id = 140179400550072, status = 257942992, opcode = 32638, vendor_err = 17596096, byte_len = 0, {imm_data = 1595635712, invalidated_rkey = 1595635712}, qp_num = 2829904551, src_qp = 19307184, wc_flags = 0, pkey_index = 60760, slid = 65535, sl = 255 '\377', dlid_path_bits = 255 '\377'}, {wr_id = 0, status = IBV_WC_LOC_PROT_ERR, opcode = IBV_WC_SEND, vendor_err = 336608, byte_len = 32638, {imm_data = 319513251, invalidated_rkey = 319513251}, qp_num = 32638, src_qp = 257943072, wc_flags = 32638, pkey_index = 6, slid = 4876, sl = 126 '~', dlid_path_bits = 127 '\177'}, {wr_id = 140179400550000, status = 319597857, opcode = 32638, vendor_err = 0, byte_len = 0, {imm_data = 334480, invalidated_rkey = 334480}, qp_num = 32638, src_qp = 336608, wc_flags = 32638, pkey_index = 11712, slid = 247, sl = 0 '\000', dlid_path_bits = 0 '\000'}, {wr_id = 19305856, status = 336608, opcode = IBV_WC_SEND, vendor_err = 610016, byte_len = 32638, {imm_data = 0, invalidated_rkey = 0}, qp_num = 0, src_qp = 257943216, wc_flags = 32638, pkey_index = 30720, slid = 24347, sl = 167 '\247', dlid_path_bits = 234 '\352'}, {wr_id = 140179400550064, status = 4294962520, opcode = 4294967295, vendor_err = 0, byte_len = 0, {imm_data = 609904, invalidated_rkey = 609904}, qp_num = 32638, src_qp = 0, wc_flags = 0, pkey_index = 53200, slid = 54047, sl = 255 '\377', dlid_path_bits = 127 '\177'}} 00:28:30.716 rdma_wr = 0x2000004638da 00:28:30.716 rdma_req = 0x2000004635d0 00:28:30.716 rdma_recv = 0x100450000 00:28:30.716 rqpair = 0x10c8b10 00:28:30.716 tmp_rqpair = 0x100000016 00:28:30.716 reaped = 1 00:28:30.716 i = 0 00:28:30.716 count = 1 00:28:30.716 rc = 32638 00:28:30.716 error = false 00:28:30.716 poll_tsc = 3624548067366638 00:28:30.716 __func__ = "nvmf_rdma_poller_poll" 00:28:30.716 __PRETTY_FUNCTION__ = "nvmf_rdma_poller_poll" 00:28:30.716 #2 0x00007f7e13aefe42 in nvmf_rdma_poll_group_poll (group=0x7f7e0000ee70) at rdma.c:4859 00:28:30.716 rtransport = 0xf8adc0 00:28:30.716 rgroup = 0x7f7e0000ee70 00:28:30.716 rpoller = 0x7f7e0000eee0 00:28:30.716 tmp = 0x7f7e00051260 00:28:30.716 count = 0 00:28:30.716 rc = 0 00:28:30.716 rc2 = 0 00:28:30.716 #3 0x00007f7e13a3237b in nvmf_transport_poll_group_poll (group=0x7f7e0000ee70) at transport.c:728 00:28:30.716 No locals. 
00:28:30.716 #4 0x00007f7e139eae3d in nvmf_poll_group_poll (ctx=0x7f7e00000c00) at nvmf.c:157 00:28:30.716 group = 0x7f7e00000c00 00:28:30.716 rc = 0 00:28:30.716 count = 0 00:28:30.716 tgroup = 0x7f7e0000ee70 00:28:30.716 #5 0x00007f7e130bcf3e in thread_execute_poller (thread=0xf72dc0, poller=0x7f7e00000cc0) at thread.c:959 00:28:30.716 rc = 0 00:28:30.716 __PRETTY_FUNCTION__ = "thread_execute_poller" 00:28:30.716 __func__ = "thread_execute_poller" 00:28:30.716 #6 0x00007f7e130becc4 in thread_poll (thread=0xf72dc0, max_msgs=0, now=3624548067359856) at thread.c:1085 00:28:30.716 poller_rc = 0 00:28:30.716 msg_count = 0 00:28:30.716 poller = 0x7f7e00000cc0 00:28:30.716 tmp = 0x0 00:28:30.716 critical_msg = 0x0 00:28:30.716 rc = 0 00:28:30.716 #7 0x00007f7e130bf963 in spdk_thread_poll (thread=0xf72dc0, max_msgs=0, now=3624548067359856) at thread.c:1173 00:28:30.716 orig_thread = 0x0 00:28:30.716 rc = 0 00:28:30.716 #8 0x00007f7e134853d5 in _reactor_run (reactor=0xf67a80) at reactor.c:906 00:28:30.716 thread = 0xf72dc0 00:28:30.716 lw_thread = 0xf73108 00:28:30.716 tmp = 0x0 00:28:30.716 now = 3624548067359856 00:28:30.716 rc = 0 00:28:30.716 #9 0x00007f7e13485a25 in reactor_run (arg=0xf67a80) at reactor.c:944 00:28:30.716 reactor = 0xf67a80 00:28:30.716 thread = 0x7f7e126361c1 00:28:30.716 lw_thread = 0x0 00:28:30.716 tmp = 0x0 00:28:30.716 thread_name = "reactor_3\000\000\000\000\000\000\000\203\351_\017~\177\000\000\001\000\000\000\000\000\000" 00:28:30.716 last_sched = 0 00:28:30.716 __func__ = "reactor_run" 00:28:30.716 #10 0x00007f7e12de7925 in eal_thread_loop (arg=0x3) at ../lib/eal/common/eal_common_thread.c:212 00:28:30.716 f = 0x7f7e134857b4 00:28:30.716 fct_arg = 0xf67a80 00:28:30.716 lcore_id = 3 00:28:30.716 cpuset = "3", '\000' 00:28:30.716 ret = 0 00:28:30.716 #11 0x00007f7e12e02c5e in eal_worker_thread_loop (arg=0x3) at ../lib/eal/linux/eal.c:916 00:28:30.716 No locals. 00:28:30.716 #12 0x00007f7e125c1947 in start_thread () from /usr/lib64/libc.so.6 00:28:30.716 No symbol table info available. 00:28:30.716 #13 0x00007f7e12647970 in clone3 () from /usr/lib64/libc.so.6 00:28:30.716 No symbol table info available. 00:28:30.716 00:28:30.716 Thread 2 (Thread 0x7f7e1140fa00 (LWP 1462578)): 00:28:30.716 #0 rte_get_timer_cycles () at /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/dpdk/build/include/generic/rte_cycles.h:99 00:28:30.716 No locals. 00:28:30.716 #1 0x00007f7e13d3f540 in spdk_get_ticks () at env.c:298 00:28:30.716 No locals. 
00:28:30.716 #2 0x00007f7e13aec1b3 in nvmf_rdma_poller_poll (rtransport=0xf8adc0, rpoller=0xf99730) at rdma.c:4625 00:28:30.716 wc = {{wr_id = 35184797913920, status = IBV_WC_WR_FLUSH_ERR, opcode = IBV_WC_RECV, vendor_err = 65537, byte_len = 0, {imm_data = 108, invalidated_rkey = 108}, qp_num = 411, src_qp = 411, wc_flags = 0, pkey_index = 0, slid = 1, sl = 1 '\001', dlid_path_bits = 0 '\000'}, {wr_id = 35184797914024, status = IBV_WC_WR_FLUSH_ERR, opcode = IBV_WC_RECV, vendor_err = 65537, byte_len = 0, {imm_data = 330279424, invalidated_rkey = 330279424}, qp_num = 411, src_qp = 411, wc_flags = 0, pkey_index = 0, slid = 0, sl = 0 '\000', dlid_path_bits = 0 '\000'}, {wr_id = 35184797914128, status = IBV_WC_WR_FLUSH_ERR, opcode = IBV_WC_RECV, vendor_err = 65537, byte_len = 0, {imm_data = 330279424, invalidated_rkey = 330279424}, qp_num = 411, src_qp = 411, wc_flags = 0, pkey_index = 8, slid = 0, sl = 0 '\000', dlid_path_bits = 0 '\000'}, {wr_id = 140736735466112, status = 426770048, opcode = 8192, vendor_err = 65537, byte_len = 0, {imm_data = 3542076360, invalidated_rkey = 3542076360}, qp_num = 411, src_qp = 8, wc_flags = 8, pkey_index = 8, slid = 0, sl = 255 '\377', dlid_path_bits = 127 '\177'}, {wr_id = 35184797907680, status = IBV_WC_WR_FLUSH_ERR, opcode = IBV_WC_RECV, vendor_err = 65537, byte_len = 0, {imm_data = 3542077344, invalidated_rkey = 3542077344}, qp_num = 411, src_qp = 411, wc_flags = 0, pkey_index = 65156, slid = 2895, sl = 0 '\000', dlid_path_bits = 32 ' '}, {wr_id = 35184797907784, status = IBV_WC_WR_FLUSH_ERR, opcode = IBV_WC_RECV, vendor_err = 65537, byte_len = 0, {imm_data = 32, invalidated_rkey = 32}, qp_num = 411, src_qp = 411, wc_flags = 0, pkey_index = 52496, slid = 54047, sl = 255 '\377', dlid_path_bits = 127 '\177'}, {wr_id = 35184797907888, status = IBV_WC_LOC_ACCESS_ERR, opcode = IBV_WC_SEND, vendor_err = 3542078272, byte_len = 32767, {imm_data = 66056704, invalidated_rkey = 66056704}, qp_num = 8192, src_qp = 411, wc_flags = 0, pkey_index = 0, slid = 1, sl = 0 '\000', dlid_path_bits = 0 '\000'}, {wr_id = 34359738376, status = IBV_WC_LOC_ACCESS_ERR, opcode = IBV_WC_FLUSH, vendor_err = 8, byte_len = 0, {imm_data = 1, invalidated_rkey = 1}, qp_num = 1, src_qp = 1, wc_flags = 0, pkey_index = 1, slid = 0, sl = 8 '\b', dlid_path_bits = 0 '\000'}, {wr_id = 35184797908096, status = IBV_WC_WR_FLUSH_ERR, opcode = IBV_WC_RECV, vendor_err = 65537, byte_len = 0, {imm_data = 131072, invalidated_rkey = 131072}, qp_num = 411, src_qp = 411, wc_flags = 0, pkey_index = 0, slid = 0, sl = 0 '\000', dlid_path_bits = 0 '\000'}, {wr_id = 35184797908200, status = IBV_WC_WR_FLUSH_ERR, opcode = IBV_WC_RECV, vendor_err = 65537, byte_len = 0, {imm_data = 3542076648, invalidated_rkey = 3542076648}, qp_num = 411, src_qp = 411, wc_flags = 0, pkey_index = 53456, slid = 54047, sl = 98 'b', dlid_path_bits = 0 '\000'}, {wr_id = 35184797908304, status = IBV_WC_WR_FLUSH_ERR, opcode = IBV_WC_RECV, vendor_err = 65537, byte_len = 0, {imm_data = 8, invalidated_rkey = 8}, qp_num = 8, src_qp = 8, wc_flags = 0, pkey_index = 1, slid = 0, sl = 1 '\001', dlid_path_bits = 0 '\000'}, {wr_id = 1, status = IBV_WC_LOC_LEN_ERR, opcode = IBV_WC_FLUSH, vendor_err = 65537, byte_len = 0, {imm_data = 3542077696, invalidated_rkey = 3542077696}, qp_num = 411, src_qp = 411, wc_flags = 0, pkey_index = 53504, slid = 54047, sl = 255 '\377', dlid_path_bits = 127 '\177'}, {wr_id = 35184797908512, status = IBV_WC_WR_FLUSH_ERR, opcode = IBV_WC_RECV, vendor_err = 65537, byte_len = 0, {imm_data = 3542077024, invalidated_rkey = 
3542077024}, qp_num = 411, src_qp = 411, wc_flags = 0, pkey_index = 35744, slid = 246, sl = 0 '\000', dlid_path_bits = 0 '\000'}, {wr_id = 35184797908616, status = IBV_WC_WR_FLUSH_ERR, opcode = IBV_WC_RECV, vendor_err = 65537, byte_len = 0, {imm_data = 319878024, invalidated_rkey = 319878024}, qp_num = 411, src_qp = 411, wc_flags = 0, pkey_index = 9888, slid = 247, sl = 0 '\000', dlid_path_bits = 0 '\000'}, {wr_id = 35184797908720, status = IBV_WC_WR_FLUSH_ERR, opcode = IBV_WC_RECV, vendor_err = 65537, byte_len = 0, {imm_data = 16196832, invalidated_rkey = 16196832}, qp_num = 411, src_qp = 411, wc_flags = 0, pkey_index = 0, slid = 0, sl = 0 '\000', dlid_path_bits = 0 '\000'}, {wr_id = 35184797908824, status = IBV_WC_WR_FLUSH_ERR, opcode = IBV_WC_RECV, vendor_err = 65537, byte_len = 0, {imm_data = 0, invalidated_rkey = 0}, qp_num = 411, src_qp = 411, wc_flags = 0, pkey_index = 99, slid = 0, sl = 0 '\000', dlid_path_bits = 0 '\000'}, {wr_id = 35184797908928, status = IBV_WC_SUCCESS, opcode = IBV_WC_RECV, vendor_err = 426770048, byte_len = 8192, {imm_data = 3542078080, invalidated_rkey = 3542078080}, qp_num = 32767, src_qp = 0, wc_flags = 0, pkey_index = 65152, slid = 6511, sl = 0 '\000', dlid_path_bits = 32 ' '}, {wr_id = 140736735466112, status = IBV_WC_SUCCESS, opcode = IBV_WC_SEND, vendor_err = 65537, byte_len = 0, {imm_data = 3542077264, invalidated_rkey = 3542077264}, qp_num = 411, src_qp = 411, wc_flags = 0, pkey_index = 0, slid = 0, sl = 0 '\000', dlid_path_bits = 0 '\000'}, {wr_id = 35184797909136, status = IBV_WC_WR_FLUSH_ERR, opcode = IBV_WC_RECV, vendor_err = 65537, byte_len = 0, {imm_data = 67300160, invalidated_rkey = 67300160}, qp_num = 411, src_qp = 411, wc_flags = 0, pkey_index = 1168, slid = 4876, sl = 126 '~', dlid_path_bits = 127 '\177'}, {wr_id = 35184797909240, status = IBV_WC_LOC_QP_OP_ERR, opcode = IBV_WC_RECV, vendor_err = 65537, byte_len = 0, {imm_data = 0, invalidated_rkey = 0}, qp_num = 411, src_qp = 66056704, wc_flags = 8192, pkey_index = 54080, slid = 54047, sl = 255 '\377', dlid_path_bits = 127 '\177'}, {wr_id = 0, status = 66056704, opcode = 8192, vendor_err = 3542078272, byte_len = 32767, {imm_data = 0, invalidated_rkey = 0}, qp_num = 0, src_qp = 411, wc_flags = 0, pkey_index = 8192, slid = 3356, sl = 126 '~', dlid_path_bits = 127 '\177'}, {wr_id = 35184797909448, status = IBV_WC_WR_FLUSH_ERR, opcode = IBV_WC_RECV, vendor_err = 65537, byte_len = 0, {imm_data = 1, invalidated_rkey = 1}, qp_num = 411, src_qp = 426770048, wc_flags = 8192, pkey_index = 53888, slid = 54047, sl = 255 '\377', dlid_path_bits = 127 '\177'}, {wr_id = 0, status = 426770048, opcode = 8192, vendor_err = 3542078080, byte_len = 32767, {imm_data = 0, invalidated_rkey = 0}, qp_num = 0, src_qp = 426770048, wc_flags = 8192, pkey_index = 52888, slid = 54047, sl = 255 '\377', dlid_path_bits = 127 '\177'}, {wr_id = 140736735465120, status = 3542077096, opcode = 32767, vendor_err = 65537, byte_len = 0, {imm_data = 221121272, invalidated_rkey = 221121272}, qp_num = 411, src_qp = 411, wc_flags = 0, pkey_index = 48, slid = 0, sl = 48 '0', dlid_path_bits = 0 '\000'}, {wr_id = 35184797909760, status = IBV_WC_WR_FLUSH_ERR, opcode = IBV_WC_RECV, vendor_err = 65537, byte_len = 0, {imm_data = 1, invalidated_rkey = 1}, qp_num = 411, src_qp = 411, wc_flags = 0, pkey_index = 58752, slid = 56265, sl = 129 '\201', dlid_path_bits = 224 '\340'}, {wr_id = 35184438145536, status = 3542078272, opcode = 32767, vendor_err = 0, byte_len = 0, {imm_data = 66056704, invalidated_rkey = 66056704}, qp_num = 8192, src_qp = 
3542078272, wc_flags = 32767, pkey_index = 0, slid = 0, sl = 0 '\000', dlid_path_bits = 0 '\000'}, {wr_id = 35184438145536, status = 3542077240, opcode = 32767, vendor_err = 3542077248, byte_len = 32767, {imm_data = 3542077256, invalidated_rkey = 3542077256}, qp_num = 32767, src_qp = 411, wc_flags = 0, pkey_index = 48, slid = 0, sl = 48 '0', dlid_path_bits = 0 '\000'}, {wr_id = 35184797910072, status = IBV_WC_WR_FLUSH_ERR, opcode = IBV_WC_RECV, vendor_err = 65537, byte_len = 0, {imm_data = 1595635712, invalidated_rkey = 1595635712}, qp_num = 411, src_qp = 411, wc_flags = 0, pkey_index = 60760, slid = 65535, sl = 255 '\377', dlid_path_bits = 255 '\377'}, {wr_id = 35184797910176, status = IBV_WC_WR_FLUSH_ERR, opcode = IBV_WC_RECV, vendor_err = 65537, byte_len = 0, {imm_data = 319513251, invalidated_rkey = 319513251}, qp_num = 411, src_qp = 411, wc_flags = 0, pkey_index = 6, slid = 4876, sl = 126 '~', dlid_path_bits = 127 '\177'}, {wr_id = 35184797910280, status = IBV_WC_WR_FLUSH_ERR, opcode = IBV_WC_RECV, vendor_err = 65537, byte_len = 0, {imm_data = 16629776, invalidated_rkey = 16629776}, qp_num = 411, src_qp = 411, wc_flags = 0, pkey_index = 13536, slid = 247, sl = 0 '\000', dlid_path_bits = 0 '\000'}, {wr_id = 35184797910384, status = IBV_WC_WR_FLUSH_ERR, opcode = IBV_WC_RECV, vendor_err = 65537, byte_len = 0, {imm_data = 343076864, invalidated_rkey = 343076864}, qp_num = 411, src_qp = 411, wc_flags = 0, pkey_index = 38096, slid = 4708, sl = 126 '~', dlid_path_bits = 127 '\177'}, {wr_id = 35184797910488, status = IBV_WC_SUCCESS, opcode = IBV_WC_SEND, vendor_err = 0, byte_len = 0, {imm_data = 16294288, invalidated_rkey = 16294288}, qp_num = 4294967295, src_qp = 3542077904, wc_flags = 32767, pkey_index = 36564, slid = 4863, sl = 126 '~', dlid_path_bits = 127 '\177'}} 00:28:30.716 rdma_wr = 0x2000196185d8 00:28:30.716 rdma_req = 0x800000001 00:28:30.716 rdma_recv = 0x200019618588 00:28:30.716 rqpair = 0x10c8310 00:28:30.716 tmp_rqpair = 0x1 00:28:30.716 reaped = 0 00:28:30.716 i = 0 00:28:30.716 count = 0 00:28:30.716 rc = 98 00:28:30.716 error = false 00:28:30.716 poll_tsc = 3624548067360470 00:28:30.716 __func__ = "nvmf_rdma_poller_poll" 00:28:30.716 __PRETTY_FUNCTION__ = "nvmf_rdma_poller_poll" 00:28:30.716 #3 0x00007f7e13aefe42 in nvmf_rdma_poll_group_poll (group=0xf996c0) at rdma.c:4859 00:28:30.716 rtransport = 0xf8adc0 00:28:30.716 rgroup = 0xf996c0 00:28:30.716 rpoller = 0xf99730 00:28:30.716 tmp = 0xf9a9b0 00:28:30.716 count = 0 00:28:30.716 rc = 0 00:28:30.716 rc2 = 0 00:28:30.716 #4 0x00007f7e13a3237b in nvmf_transport_poll_group_poll (group=0xf996c0) at transport.c:728 00:28:30.716 No locals. 
00:28:30.716 #5 0x00007f7e139eae3d in nvmf_poll_group_poll (ctx=0xf738d0) at nvmf.c:157 00:28:30.716 group = 0xf738d0 00:28:30.716 rc = 0 00:28:30.716 count = 0 00:28:30.716 tgroup = 0xf996c0 00:28:30.716 #6 0x00007f7e130bcf3e in thread_execute_poller (thread=0xf734e0, poller=0xf73990) at thread.c:959 00:28:30.716 rc = 0 00:28:30.716 __PRETTY_FUNCTION__ = "thread_execute_poller" 00:28:30.716 __func__ = "thread_execute_poller" 00:28:30.716 #7 0x00007f7e130becc4 in thread_poll (thread=0xf734e0, max_msgs=0, now=3624548067366118) at thread.c:1085 00:28:30.716 poller_rc = 0 00:28:30.716 msg_count = 0 00:28:30.716 poller = 0xf73990 00:28:30.716 tmp = 0x0 00:28:30.716 critical_msg = 0x0 00:28:30.716 rc = 0 00:28:30.716 #8 0x00007f7e130bf963 in spdk_thread_poll (thread=0xf734e0, max_msgs=0, now=3624548067366118) at thread.c:1173 00:28:30.716 orig_thread = 0x0 00:28:30.716 rc = 0 00:28:30.716 #9 0x00007f7e134853d5 in _reactor_run (reactor=0xf67500) at reactor.c:906 00:28:30.716 thread = 0xf734e0 00:28:30.716 lw_thread = 0xf73828 00:28:30.716 tmp = 0x0 00:28:30.716 now = 3624548067366118 00:28:30.716 rc = 0 00:28:30.716 #10 0x00007f7e13485a25 in reactor_run (arg=0xf67500) at reactor.c:944 00:28:30.716 reactor = 0xf67500 00:28:30.716 thread = 0x7f7e12dd7cb1 00:28:30.716 lw_thread = 0x7f130c0490 00:28:30.716 tmp = 0x7fffd31fd450 00:28:30.716 thread_name = "reactor_1\000\000\000\200\000\000\000\200\324\037\323\377\177\000\000S\370\326\023\001\000\000" 00:28:30.716 last_sched = 0 00:28:30.716 __func__ = "reactor_run" 00:28:30.716 #11 0x00007f7e13486532 in spdk_reactors_start () at reactor.c:1060 00:28:30.716 reactor = 0xf67500 00:28:30.716 i = 4294967295 00:28:30.716 current_core = 1 00:28:30.716 rc = 0 00:28:30.716 __func__ = "spdk_reactors_start" 00:28:30.716 __PRETTY_FUNCTION__ = "spdk_reactors_start" 00:28:30.716 #12 0x00007f7e1347a97b in spdk_app_start (opts_user=0x7fffd31fd7f0, start_fn=0x4023df , arg1=0x0) at app.c:980 00:28:30.716 rc = 0 00:28:30.716 tty = 0x0 00:28:30.716 tmp_cpumask = {str = '\000' , cpus = "\002", '\000' } 00:28:30.716 g_env_was_setup = false 00:28:30.716 opts_local = {name = 0x42d01d "nvmf", json_config_file = 0x0, json_config_ignore_errors = false, reserved17 = "\000\000\000\000\000\000", rpc_addr = 0x7f7e1349223c "/var/tmp/spdk.sock", reactor_mask = 0x7fffd31ff02b "0x1E", tpoint_group_mask = 0x7fffd31ff021 "0xFFFF", shm_id = 0, reserved52 = "\000\000\000", shutdown_cb = 0x0, enable_coredump = true, reserved65 = "\000\000", mem_channel = -1, main_core = -1, mem_size = -1, no_pci = false, hugepage_single_segments = false, unlink_hugepage = false, no_huge = false, reserved84 = "\000\000\000", hugedir = 0x0, print_level = SPDK_LOG_INFO, reserved100 = "\000\000\000", num_pci_addr = 0, pci_blocked = 0x0, pci_allowed = 0x0, iova_mode = 0x0, delay_subsystem_init = false, reserved137 = "\000\000\000\000\000\000", num_entries = 32768, env_context = 0x0, log = 0x0, base_virtaddr = 35184372088832, opts_size = 252, disable_signal_handlers = false, interrupt_mode = false, reserved186 = "\000\000\000\000\000", msg_mempool_size = 262143, rpc_allowlist = 0x0, vf_token = 0x0, lcore_map = 0x0, rpc_log_level = SPDK_LOG_DISABLED, rpc_log_file = 0x0, json_data = 0x0, json_data_size = 0} 00:28:30.716 opts = 0x7fffd31fd510 00:28:30.716 i = 128 00:28:30.716 core = 4294967295 00:28:30.716 __func__ = "spdk_app_start" 00:28:30.716 #13 0x0000000000402566 in main (argc=7, argv=0x7fffd31fda28) at nvmf_main.c:47 00:28:30.716 rc = 1 00:28:30.716 opts = {name = 0x42d01d "nvmf", json_config_file = 0x0, 
json_config_ignore_errors = false, reserved17 = "\000\000\000\000\000\000", rpc_addr = 0x7f7e1349223c "/var/tmp/spdk.sock", reactor_mask = 0x7fffd31ff02b "0x1E", tpoint_group_mask = 0x7fffd31ff021 "0xFFFF", shm_id = 0, reserved52 = "\000\000\000", shutdown_cb = 0x0, enable_coredump = true, reserved65 = "\000\000", mem_channel = -1, main_core = -1, mem_size = -1, no_pci = false, hugepage_single_segments = false, unlink_hugepage = false, no_huge = false, reserved84 = "\000\000\000", hugedir = 0x0, print_level = SPDK_LOG_INFO, reserved100 = "\000\000\000", num_pci_addr = 0, pci_blocked = 0x0, pci_allowed = 0x0, iova_mode = 0x0, delay_subsystem_init = false, reserved137 = "\000\000\000\000\000\000", num_entries = 32768, env_context = 0x0, log = 0x0, base_virtaddr = 35184372088832, opts_size = 252, disable_signal_handlers = false, interrupt_mode = false, reserved186 = "\000\000\000\000\000", msg_mempool_size = 0, rpc_allowlist = 0x0, vf_token = 0x0, lcore_map = 0x0, rpc_log_level = SPDK_LOG_DISABLED, rpc_log_file = 0x0, json_data = 0x0, json_data_size = 0} 00:28:30.716 00:28:30.716 Thread 1 (Thread 0x7f7e100006c0 (LWP 1462582)): 00:28:30.716 #0 0x00007f7e125c3884 in __pthread_kill_implementation () from /usr/lib64/libc.so.6 00:28:30.716 No symbol table info available. 00:28:30.716 #1 0x00007f7e12572afe in raise () from /usr/lib64/libc.so.6 00:28:30.716 No symbol table info available. 00:28:30.716 #2 0x00007f7e1255b87f in abort () from /usr/lib64/libc.so.6 00:28:30.716 No symbol table info available. 00:28:30.716 #3 0x00007f7e1255b79b in __assert_fail_base.cold () from /usr/lib64/libc.so.6 00:28:30.716 No symbol table info available. 00:28:30.716 #4 0x00007f7e1256b187 in __assert_fail () from /usr/lib64/libc.so.6 00:28:30.716 No symbol table info available. 
00:28:30.716 #5 0x00007f7e13aee676 in nvmf_rdma_poller_poll (rtransport=0xf8adc0, rpoller=0x7f7e0800eee0) at rdma.c:4722 00:28:30.717 wc = {{wr_id = 35184791853648, status = IBV_WC_WR_FLUSH_ERR, opcode = IBV_WC_RECV, vendor_err = 65537, byte_len = 0, {imm_data = 120585728, invalidated_rkey = 120585728}, qp_num = 414, src_qp = 414, wc_flags = 0, pkey_index = 0, slid = 0, sl = 0 '\000', dlid_path_bits = 0 '\000'}, {wr_id = 35184791864073, status = IBV_WC_SUCCESS, opcode = IBV_WC_RDMA_WRITE, vendor_err = 65537, byte_len = 65536, {imm_data = 120585728, invalidated_rkey = 120585728}, qp_num = 414, src_qp = 414, wc_flags = 0, pkey_index = 0, slid = 0, sl = 0 '\000', dlid_path_bits = 0 '\000'}, {wr_id = 35184791864074, status = IBV_WC_SUCCESS, opcode = IBV_WC_SEND, vendor_err = 65537, byte_len = 16, {imm_data = 8, invalidated_rkey = 8}, qp_num = 414, src_qp = 414, wc_flags = 0, pkey_index = 43520, slid = 5039, sl = 0 '\000', dlid_path_bits = 32 ' '}, {wr_id = 35184791907850, status = IBV_WC_SUCCESS, opcode = IBV_WC_SEND, vendor_err = 8, byte_len = 16, {imm_data = 8, invalidated_rkey = 8}, qp_num = 414, src_qp = 414, wc_flags = 0, pkey_index = 30720, slid = 24347, sl = 167 '\247', dlid_path_bits = 234 '\352'}, {wr_id = 35184791909218, status = IBV_WC_SUCCESS, opcode = IBV_WC_SEND, vendor_err = 65537, byte_len = 16, {imm_data = 136963072, invalidated_rkey = 136963072}, qp_num = 414, src_qp = 414, wc_flags = 0, pkey_index = 58288, slid = 4095, sl = 126 '~', dlid_path_bits = 127 '\177'}, {wr_id = 35184791910586, status = IBV_WC_SUCCESS, opcode = IBV_WC_SEND, vendor_err = 65537, byte_len = 16, {imm_data = 32, invalidated_rkey = 32}, qp_num = 414, src_qp = 414, wc_flags = 0, pkey_index = 57888, slid = 4095, sl = 126 '~', dlid_path_bits = 127 '\177'}, {wr_id = 35184791911954, status = IBV_WC_SUCCESS, opcode = IBV_WC_SEND, vendor_err = 65537, byte_len = 16, {imm_data = 8, invalidated_rkey = 8}, qp_num = 414, src_qp = 414, wc_flags = 0, pkey_index = 8, slid = 0, sl = 8 '\b', dlid_path_bits = 0 '\000'}, {wr_id = 35184791913322, status = IBV_WC_SUCCESS, opcode = IBV_WC_SEND, vendor_err = 1, byte_len = 16, {imm_data = 1, invalidated_rkey = 1}, qp_num = 414, src_qp = 414, wc_flags = 0, pkey_index = 8, slid = 0, sl = 1 '\001', dlid_path_bits = 0 '\000'}, {wr_id = 35184791914690, status = IBV_WC_SUCCESS, opcode = IBV_WC_SEND, vendor_err = 65537, byte_len = 16, {imm_data = 268428768, invalidated_rkey = 268428768}, qp_num = 414, src_qp = 414, wc_flags = 0, pkey_index = 65024, slid = 2895, sl = 0 '\000', dlid_path_bits = 32 ' '}, {wr_id = 35184791916058, status = IBV_WC_SUCCESS, opcode = IBV_WC_SEND, vendor_err = 65537, byte_len = 16, {imm_data = 268427768, invalidated_rkey = 268427768}, qp_num = 414, src_qp = 414, wc_flags = 0, pkey_index = 0, slid = 0, sl = 126 '~', dlid_path_bits = 127 '\177'}, {wr_id = 35184791917426, status = IBV_WC_SUCCESS, opcode = IBV_WC_SEND, vendor_err = 65537, byte_len = 16, {imm_data = 268428768, invalidated_rkey = 268428768}, qp_num = 414, src_qp = 414, wc_flags = 0, pkey_index = 8, slid = 0, sl = 8 '\b', dlid_path_bits = 0 '\000'}, {wr_id = 35184791918794, status = IBV_WC_SUCCESS, opcode = IBV_WC_SEND, vendor_err = 1, byte_len = 16, {imm_data = 1, invalidated_rkey = 1}, qp_num = 414, src_qp = 414, wc_flags = 0, pkey_index = 0, slid = 0, sl = 0 '\000', dlid_path_bits = 0 '\000'}, {wr_id = 35184791920162, status = IBV_WC_SUCCESS, opcode = IBV_WC_SEND, vendor_err = 65537, byte_len = 16, {imm_data = 189791876, invalidated_rkey = 189791876}, qp_num = 414, src_qp = 414, wc_flags = 0, 
pkey_index = 73, slid = 0, sl = 1 '\001', dlid_path_bits = 0 '\000'}, {wr_id = 35184791921530, status = IBV_WC_SUCCESS, opcode = IBV_WC_SEND, vendor_err = 65537, byte_len = 16, {imm_data = 6, invalidated_rkey = 6}, qp_num = 414, src_qp = 414, wc_flags = 0, pkey_index = 0, slid = 0, sl = 0 '\000', dlid_path_bits = 0 '\000'}, {wr_id = 35184791922898, status = IBV_WC_SUCCESS, opcode = IBV_WC_SEND, vendor_err = 65537, byte_len = 16, {imm_data = 1, invalidated_rkey = 1}, qp_num = 414, src_qp = 414, wc_flags = 0, pkey_index = 53927, slid = 4582, sl = 126 '~', dlid_path_bits = 127 '\177'}, {wr_id = 35184791924266, status = IBV_WC_SUCCESS, opcode = IBV_WC_SEND, vendor_err = 65537, byte_len = 16, {imm_data = 111, invalidated_rkey = 111}, qp_num = 414, src_qp = 414, wc_flags = 0, pkey_index = 0, slid = 0, sl = 126 '~', dlid_path_bits = 127 '\177'}, {wr_id = 35184791925634, status = IBV_WC_SUCCESS, opcode = IBV_WC_SEND, vendor_err = 0, byte_len = 16, {imm_data = 330279424, invalidated_rkey = 330279424}, qp_num = 414, src_qp = 414, wc_flags = 0, pkey_index = 0, slid = 0, sl = 0 '\000', dlid_path_bits = 0 '\000'}, {wr_id = 35184791927002, status = IBV_WC_SUCCESS, opcode = IBV_WC_SEND, vendor_err = 65537, byte_len = 16, {imm_data = 67294400, invalidated_rkey = 67294400}, qp_num = 414, src_qp = 414, wc_flags = 0, pkey_index = 1168, slid = 4876, sl = 126 '~', dlid_path_bits = 127 '\177'}, {wr_id = 35184791928370, status = IBV_WC_SUCCESS, opcode = IBV_WC_SEND, vendor_err = 65537, byte_len = 16, {imm_data = 1, invalidated_rkey = 1}, qp_num = 414, src_qp = 414, wc_flags = 0, pkey_index = 54464, slid = 1026, sl = 0 '\000', dlid_path_bits = 32 ' '}, {wr_id = 35184791929738, status = IBV_WC_SUCCESS, opcode = IBV_WC_SEND, vendor_err = 65537, byte_len = 16, {imm_data = 1, invalidated_rkey = 1}, qp_num = 414, src_qp = 414, wc_flags = 0, pkey_index = 0, slid = 0, sl = 0 '\000', dlid_path_bits = 0 '\000'}, {wr_id = 35184791931106, status = IBV_WC_SUCCESS, opcode = IBV_WC_SEND, vendor_err = 0, byte_len = 16, {imm_data = 61865088, invalidated_rkey = 61865088}, qp_num = 414, src_qp = 414, wc_flags = 0, pkey_index = 0, slid = 0, sl = 0 '\000', dlid_path_bits = 0 '\000'}, {wr_id = 35184791932474, status = IBV_WC_SUCCESS, opcode = IBV_WC_SEND, vendor_err = 330279424, byte_len = 16, {imm_data = 268429200, invalidated_rkey = 268429200}, qp_num = 414, src_qp = 414, wc_flags = 0, pkey_index = 43520, slid = 5039, sl = 0 '\000', dlid_path_bits = 32 ' '}, {wr_id = 35184791933842, status = IBV_WC_SUCCESS, opcode = IBV_WC_SEND, vendor_err = 330279424, byte_len = 16, {imm_data = 268428184, invalidated_rkey = 268428184}, qp_num = 414, src_qp = 414, wc_flags = 0, pkey_index = 58280, slid = 4095, sl = 126 '~', dlid_path_bits = 127 '\177'}, {wr_id = 35184791852712, status = IBV_WC_WR_FLUSH_ERR, opcode = IBV_WC_RECV, vendor_err = 65537, byte_len = 0, {imm_data = 221968752, invalidated_rkey = 221968752}, qp_num = 414, src_qp = 414, wc_flags = 0, pkey_index = 48, slid = 0, sl = 48 '0', dlid_path_bits = 0 '\000'}, {wr_id = 35184791852816, status = IBV_WC_WR_FLUSH_ERR, opcode = IBV_WC_RECV, vendor_err = 65537, byte_len = 0, {imm_data = 1, invalidated_rkey = 1}, qp_num = 414, src_qp = 414, wc_flags = 0, pkey_index = 11170, slid = 56317, sl = 129 '\201', dlid_path_bits = 224 '\340'}, {wr_id = 35184791852920, status = IBV_WC_WR_FLUSH_ERR, opcode = IBV_WC_RECV, vendor_err = 61865088, byte_len = 8192, {imm_data = 268429392, invalidated_rkey = 268429392}, qp_num = 32638, src_qp = 0, wc_flags = 0, pkey_index = 64640, slid = 943, sl = 0 '\000', 
dlid_path_bits = 32 ' '}, {wr_id = 140179411036240, status = IBV_WC_SUCCESS, opcode = IBV_WC_SEND, vendor_err = 61865088, byte_len = 8192, {imm_data = 268428376, invalidated_rkey = 268428376}, qp_num = 32638, src_qp = 268428384, wc_flags = 32638, pkey_index = 58472, slid = 4095, sl = 126 '~', dlid_path_bits = 127 '\177'}, {wr_id = 35184791853128, status = IBV_WC_WR_FLUSH_ERR, opcode = IBV_WC_RECV, vendor_err = 65537, byte_len = 0, {imm_data = 1595635712, invalidated_rkey = 1595635712}, qp_num = 414, src_qp = 414, wc_flags = 0, pkey_index = 60760, slid = 65535, sl = 255 '\377', dlid_path_bits = 255 '\377'}, {wr_id = 35184791853232, status = IBV_WC_WR_FLUSH_ERR, opcode = IBV_WC_RECV, vendor_err = 65537, byte_len = 0, {imm_data = 319513251, invalidated_rkey = 319513251}, qp_num = 414, src_qp = 414, wc_flags = 0, pkey_index = 6, slid = 4876, sl = 126 '~', dlid_path_bits = 127 '\177'}, {wr_id = 35184791853336, status = IBV_WC_WR_FLUSH_ERR, opcode = IBV_WC_RECV, vendor_err = 65537, byte_len = 0, {imm_data = 134552208, invalidated_rkey = 134552208}, qp_num = 414, src_qp = 414, wc_flags = 0, pkey_index = 10800, slid = 247, sl = 0 '\000', dlid_path_bits = 0 '\000'}, {wr_id = 35184791853440, status = IBV_WC_WR_FLUSH_ERR, opcode = IBV_WC_RECV, vendor_err = 65537, byte_len = 0, {imm_data = 0, invalidated_rkey = 0}, qp_num = 414, src_qp = 414, wc_flags = 0, pkey_index = 30720, slid = 24347, sl = 167 '\247', dlid_path_bits = 234 '\352'}, {wr_id = 35184791853544, status = IBV_WC_WR_FLUSH_ERR, opcode = IBV_WC_RECV, vendor_err = 65537, byte_len = 0, {imm_data = 134827632, invalidated_rkey = 134827632}, qp_num = 414, src_qp = 414, wc_flags = 0, pkey_index = 53200, slid = 54047, sl = 255 '\377', dlid_path_bits = 127 '\177'}} 00:28:30.717 rdma_wr = 0x200019054309 00:28:30.717 rdma_req = 0x200019054000 00:28:30.717 rdma_recv = 0x200019051a00 00:28:30.717 rqpair = 0x10c8710 00:28:30.717 tmp_rqpair = 0x7f7e00010000 00:28:30.717 reaped = 23 00:28:30.717 i = 1 00:28:30.717 count = 0 00:28:30.717 rc = 32638 00:28:30.717 error = true 00:28:30.717 poll_tsc = 3624548067234444 00:28:30.717 __func__ = "nvmf_rdma_poller_poll" 00:28:30.717 __PRETTY_FUNCTION__ = "nvmf_rdma_poller_poll" 00:28:30.717 #6 0x00007f7e13aefe42 in nvmf_rdma_poll_group_poll (group=0x7f7e0800ee70) at rdma.c:4859 00:28:30.717 rtransport = 0xf8adc0 00:28:30.717 rgroup = 0x7f7e0800ee70 00:28:30.717 rpoller = 0x7f7e0800eee0 00:28:30.717 tmp = 0x7f7e08051260 00:28:30.717 count = 0 00:28:30.717 rc = 0 00:28:30.717 rc2 = 0 00:28:30.717 #7 0x00007f7e13a3237b in nvmf_transport_poll_group_poll (group=0x7f7e0800ee70) at transport.c:728 00:28:30.717 No locals. 
00:28:30.717 #8 0x00007f7e139eae3d in nvmf_poll_group_poll (ctx=0x7f7e08000c00) at nvmf.c:157 00:28:30.717 group = 0x7f7e08000c00 00:28:30.717 rc = -1 00:28:30.717 count = 0 00:28:30.717 tgroup = 0x7f7e0800ee70 00:28:30.717 #9 0x00007f7e130bcf3e in thread_execute_poller (thread=0xf72a30, poller=0x7f7e08000cc0) at thread.c:959 00:28:30.717 rc = 0 00:28:30.717 __PRETTY_FUNCTION__ = "thread_execute_poller" 00:28:30.717 __func__ = "thread_execute_poller" 00:28:30.717 #10 0x00007f7e130becc4 in thread_poll (thread=0xf72a30, max_msgs=0, now=3624548067221634) at thread.c:1085 00:28:30.717 poller_rc = 0 00:28:30.717 msg_count = 0 00:28:30.717 poller = 0x7f7e08000cc0 00:28:30.717 tmp = 0x0 00:28:30.717 critical_msg = 0x0 00:28:30.717 rc = 0 00:28:30.717 #11 0x00007f7e130bf963 in spdk_thread_poll (thread=0xf72a30, max_msgs=0, now=3624548067221634) at thread.c:1173 00:28:30.717 orig_thread = 0x0 00:28:30.717 rc = 0 00:28:30.717 #12 0x00007f7e134853d5 in _reactor_run (reactor=0xf677c0) at reactor.c:906 00:28:30.717 thread = 0xf72a30 00:28:30.717 lw_thread = 0xf72d78 00:28:30.717 tmp = 0x0 00:28:30.717 now = 3624548067221634 00:28:30.717 rc = 1 00:28:30.717 #13 0x00007f7e13485a25 in reactor_run (arg=0xf677c0) at reactor.c:944 00:28:30.717 reactor = 0xf677c0 00:28:30.717 thread = 0x7f7e126361c1 00:28:30.717 lw_thread = 0x0 00:28:30.717 tmp = 0x0 00:28:30.717 thread_name = "reactor_2\000\000\000\000\000\000\000\203\351\377\017~\177\000\000\001\000\000\000\000\000\000" 00:28:30.717 last_sched = 0 00:28:30.717 __func__ = "reactor_run" 00:28:30.717 #14 0x00007f7e12de7925 in eal_thread_loop (arg=0x2) at ../lib/eal/common/eal_common_thread.c:212 00:28:30.717 f = 0x7f7e134857b4 00:28:30.717 fct_arg = 0xf677c0 00:28:30.717 lcore_id = 2 00:28:30.717 cpuset = "2", '\000' 00:28:30.717 ret = 0 00:28:30.717 #15 0x00007f7e12e02c5e in eal_worker_thread_loop (arg=0x2) at ../lib/eal/linux/eal.c:916 00:28:30.717 No locals. 00:28:30.717 #16 0x00007f7e125c1947 in start_thread () from /usr/lib64/libc.so.6 00:28:30.717 No symbol table info available. 00:28:30.717 #17 0x00007f7e12647970 in clone3 () from /usr/lib64/libc.so.6 00:28:30.717 No symbol table info available. 
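That is the end of the core report: Thread 1 died in nvmf_rdma_poller_poll (rdma.c:4722) under __assert_fail, with error = true and a completion array full of IBV_WC_WR_FLUSH_ERR entries, while Threads 6, 3 and 2 were still mid qpair teardown and RDMA polling (spdk_nvmf_qpair_disconnect -> nvmf_rdma_close_qpair -> rdma_disconnect on Thread 6). Flushed completions racing the disconnect are consistent with the assertion firing, though the report alone does not prove the root cause. If the core file is archived with this log, a report of the same shape can be regenerated with gdb in batch mode; a sketch, assuming the core and the exact nvmf_tgt binary from this build are both still on the node:

  # Rebuild a "thread apply all bt full"-style report like the one above.
  gdb --batch \
      -ex 'set pagination off' \
      -ex 'thread apply all bt full' \
      /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk/build/bin/nvmf_tgt \
      nvmf_tgt_1462578.core > nvmf_tgt_1462578.core.bt.txt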
00:28:30.717 00:28:30.717 -- 00:28:34.910 INFO: APP EXITING 00:28:34.910 INFO: killing all VMs 00:28:34.910 INFO: killing vhost app 00:28:34.910 INFO: EXIT DONE 00:28:37.447 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:28:37.447 Waiting for block devices as requested 00:28:37.447 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:28:37.447 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:37.447 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:37.447 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:37.709 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:37.709 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:37.709 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:37.709 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:37.969 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:37.969 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:37.969 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:37.969 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:38.239 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:38.239 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:38.239 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:38.239 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:38.500 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:45.070 0000:5f:00.0 (1b96 2600): Skipping denied controller at 0000:5f:00.0 00:28:45.329 Cleaning 00:28:45.329 Removing: /var/run/dpdk/spdk0/config 00:28:45.329 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:45.329 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:45.329 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:45.329 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:45.329 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:28:45.329 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:28:45.329 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:28:45.329 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:28:45.329 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:45.329 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:45.329 Removing: /var/run/dpdk/spdk0/mp_socket 00:28:45.329 Removing: /var/run/dpdk/spdk1/config 00:28:45.329 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:28:45.329 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:28:45.329 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:28:45.329 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:28:45.329 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:28:45.329 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:28:45.329 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:28:45.329 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:28:45.329 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:28:45.329 Removing: /var/run/dpdk/spdk1/hugepage_info 00:28:45.588 Removing: /var/run/dpdk/spdk1/mp_socket 00:28:45.588 Removing: /var/run/dpdk/spdk2/config 00:28:45.588 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:28:45.588 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:28:45.588 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:28:45.588 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:28:45.588 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:28:45.588 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:28:45.588 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:28:45.588 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:28:45.588 Removing: 
00:28:45.588 Removing: /var/run/dpdk/spdk2/hugepage_info
00:28:45.588 Removing: /var/run/dpdk/spdk3/config
00:28:45.588 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:28:45.588 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:28:45.588 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:28:45.588 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:28:45.588 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:28:45.588 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:28:45.588 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:28:45.588 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:28:45.588 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:28:45.588 Removing: /var/run/dpdk/spdk3/hugepage_info
00:28:45.588 Removing: /var/run/dpdk/spdk4/config
00:28:45.588 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:28:45.588 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:28:45.588 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:28:45.588 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:28:45.588 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:28:45.588 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:28:45.588 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:28:45.588 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:28:45.588 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:28:45.588 Removing: /var/run/dpdk/spdk4/hugepage_info
00:28:45.588 Removing: /dev/shm/bdevperf_trace.pid1462857
00:28:45.588 Removing: /dev/shm/bdev_svc_trace.1
00:28:45.588 Removing: /dev/shm/nvmf_trace.0
00:28:45.588 Removing: /dev/shm/spdk_tgt_trace.pid1144959
00:28:45.588 Removing: /var/tmp/spdk_cpu_lock_000
00:28:45.588 Removing: /var/tmp/spdk_cpu_lock_001
00:28:45.588 Removing: /var/tmp/spdk_cpu_lock_002
00:28:45.588 Removing: /var/tmp/spdk_cpu_lock_003
00:28:45.588 Removing: /var/tmp/spdk_cpu_lock_004
00:28:45.588 Removing: /var/run/dpdk/spdk0
00:28:45.588 Removing: /var/run/dpdk/spdk1
00:28:45.588 Removing: /var/run/dpdk/spdk2
00:28:45.588 Removing: /var/run/dpdk/spdk3
00:28:45.588 Removing: /var/run/dpdk/spdk4
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1142865
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1143895
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1144959
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1145576
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1146516
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1146751
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1147710
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1147939
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1148268
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1152758
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1154012
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1154289
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1154613
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1155040
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1155371
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1155589
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1155773
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1156043
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1156883
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1159838
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1160102
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1160355
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1160584
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1161072
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1161112
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1161573
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1161796
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1162049
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1162223
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1162328
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1162540
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1162884
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1163128
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1163415
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1163683
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1163831
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1163987
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1164231
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1164476
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1164728
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1164970
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1165218
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1165466
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1165710
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1165954
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1166205
00:28:45.588 Removing: /var/run/dpdk/spdk_pid1166447
00:28:45.847 Removing: /var/run/dpdk/spdk_pid1166697
00:28:45.847 Removing: /var/run/dpdk/spdk_pid1166946
00:28:45.847 Removing: /var/run/dpdk/spdk_pid1167190
00:28:45.847 Removing: /var/run/dpdk/spdk_pid1167439
00:28:45.847 Removing: /var/run/dpdk/spdk_pid1167687
00:28:45.847 Removing: /var/run/dpdk/spdk_pid1167932
00:28:45.847 Removing: /var/run/dpdk/spdk_pid1168183
00:28:45.847 Removing: /var/run/dpdk/spdk_pid1168430
00:28:45.847 Removing: /var/run/dpdk/spdk_pid1168675
00:28:45.847 Removing: /var/run/dpdk/spdk_pid1168926
00:28:45.847 Removing: /var/run/dpdk/spdk_pid1169203
00:28:45.847 Removing: /var/run/dpdk/spdk_pid1169509
00:28:45.847 Removing: /var/run/dpdk/spdk_pid1173165
00:28:45.847 Removing: /var/run/dpdk/spdk_pid1254886
00:28:45.847 Removing: /var/run/dpdk/spdk_pid1258752
00:28:45.847 Removing: /var/run/dpdk/spdk_pid1268998
00:28:45.847 Removing: /var/run/dpdk/spdk_pid1273872
00:28:45.847 Removing: /var/run/dpdk/spdk_pid1277145
00:28:45.847 Removing: /var/run/dpdk/spdk_pid1278056
00:28:45.847 Removing: /var/run/dpdk/spdk_pid1291499
00:28:45.847 Removing: /var/run/dpdk/spdk_pid1291746
00:28:45.847 Removing: /var/run/dpdk/spdk_pid1295646
00:28:45.847 Removing: /var/run/dpdk/spdk_pid1301197
00:28:45.847 Removing: /var/run/dpdk/spdk_pid1303832
00:28:45.847 Removing: /var/run/dpdk/spdk_pid1313730
00:28:45.847 Removing: /var/run/dpdk/spdk_pid1336236
00:28:45.847 Removing: /var/run/dpdk/spdk_pid1339517
00:28:45.847 Removing: /var/run/dpdk/spdk_pid1385284
00:28:45.847 Removing: /var/run/dpdk/spdk_pid1390738
00:28:45.847 Removing: /var/run/dpdk/spdk_pid1420304
00:28:45.847 Removing: /var/run/dpdk/spdk_pid1435984
00:28:45.847 Removing: /var/run/dpdk/spdk_pid1460916
00:28:45.847 Removing: /var/run/dpdk/spdk_pid1461785
00:28:45.847 Removing: /var/run/dpdk/spdk_pid1462857
00:28:45.847 Clean
00:31:22.322 09:09:36 nvmf_rdma -- common/autotest_common.sh@1450 -- # return 1
00:31:22.322 09:09:36 nvmf_rdma -- common/autotest_common.sh@1 -- # :
00:31:22.322 09:09:36 nvmf_rdma -- common/autotest_common.sh@1 -- # exit 1
00:31:22.335 [Pipeline] }
00:31:22.357 [Pipeline] // stage
00:31:22.364 [Pipeline] }
00:31:22.388 [Pipeline] // timeout
00:31:22.395 [Pipeline] }
00:31:22.400 ERROR: script returned exit code 1
00:31:22.400 Setting overall build result to FAILURE
00:31:22.419 [Pipeline] // catchError
00:31:22.425 [Pipeline] }
00:31:22.444 [Pipeline] // wrap
00:31:22.451 [Pipeline] }
00:31:22.470 [Pipeline] // catchError
00:31:22.480 [Pipeline] stage
00:31:22.482 [Pipeline] { (Epilogue)
00:31:22.497 [Pipeline] catchError
00:31:22.499 [Pipeline] {
00:31:22.515 [Pipeline] echo
00:31:22.517 Cleanup processes
00:31:22.523 [Pipeline] sh
00:31:22.810 + sudo pgrep -af /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
00:31:22.810 1500311 sudo pgrep -af /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
00:31:22.825 [Pipeline] sh
00:31:23.112 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk
00:31:23.113 ++ grep -v 'sudo pgrep'
00:31:23.113 ++ awk '{print $1}'
00:31:23.113 + sudo kill -9
00:31:23.113 + true
00:31:23.124 [Pipeline] sh
00:31:23.472 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:31:28.775 [Pipeline] sh
00:31:29.059 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:31:29.059 Artifacts sizes are good
00:31:29.072 [Pipeline] archiveArtifacts
00:31:29.078 Archiving artifacts
00:31:30.314 [Pipeline] sh
00:31:30.605 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-cvl-phy-autotest
00:31:30.663 [Pipeline] cleanWs
00:31:30.670 [WS-CLEANUP] Deleting project workspace...
00:31:30.670 [WS-CLEANUP] Deferred wipeout is used...
00:31:30.675 [WS-CLEANUP] done
00:31:30.676 [Pipeline] }
00:31:30.690 [Pipeline] // catchError
00:31:30.696 [Pipeline] echo
00:31:30.698 Tests finished with errors. Please check the logs for more info.
00:31:30.700 [Pipeline] echo
00:31:30.701 Execution node will be rebooted.
00:31:30.710 [Pipeline] build
00:31:30.712 Scheduling project: reset-job
00:31:30.719 [Pipeline] sh
00:31:30.995 + logger -p user.info -t JENKINS-CI
00:31:31.004 [Pipeline] }
00:31:31.021 [Pipeline] // stage
00:31:31.027 [Pipeline] }
00:31:31.045 [Pipeline] // node
00:31:31.052 [Pipeline] End of Pipeline
00:31:31.099 Finished: FAILURE
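Note (annotation, not part of the captured console output): in the epilogue's "Cleanup processes" step above, pgrep matched only its own invocation, so after 'grep -v' the PID list was empty, 'sudo kill -9' ran with no arguments, and the trailing '+ true' absorbed the non-zero exit so the step could not fail the build. A minimal standalone sketch of the same idiom (the function name is invented; the workspace path is the one this job uses):

    #!/usr/bin/env bash
    # Kill any leftover SPDK processes whose command line mentions the workspace.
    kill_leftover_spdk() {
        local pids
        # pgrep -af matches against, and prints, the full command line with the PID.
        pids=$(sudo pgrep -af /var/jenkins/workspace/nvmf-cvl-phy-autotest/spdk \
                   | grep -v 'sudo pgrep' \
                   | awk '{print $1}')
        # Guard the empty case; '|| true' mirrors the '+ true' seen in the log.
        [ -n "$pids" ] && sudo kill -9 $pids || true
    }

When nothing matches, the guard short-circuits and the step exits cleanly, which is exactly the behavior recorded above.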